Nvidia launches A100 80GB GPU for supercomputers

Nvidia has launched an 80GB version of its A100 graphics processing unit (GPU), aiming the graphics and AI chip at supercomputing applications.

The chip is based on the company’s Ampere graphics architecture and is aimed at helping businesses and government labs make key decisions more quickly by enabling better real-time data analysis. Nvidia made the announcement at the outset of the SC20 supercomputing conference this week.

The 80GB version has twice the memory of the original 40GB A100, which was introduced six months ago.

“We’ve doubled everything in this system to make it more effective for customers,” Nvidia executive Paresh Kharya said in a press briefing.

The new chip provides researchers and engineers with more speed and performance for their AI and scientific applications. It delivers more than 2 terabytes per second of memory bandwidth, which enables a system to feed data to the GPU more quickly.

“Supercomputing has changed in profound ways, expanding from being just focused on simulations to AI supercomputing with data-driven approaches that are now complementing traditional simulations,” Kharya said.

He added that Nvidia’s end-to-end approach to supercomputing, spanning workflows from simulation to AI, is necessary to keep making advances. Kharya said Nvidia now has 2.3 million developers across its various platforms, and that supercomputing serves the leading edge of that developer base.

He noted that a recent coronavirus simulation modeled 305 million atoms. The run, which used 27,000 Nvidia GPUs, was the largest molecular simulation ever completed, he said.

The Nvidia A100 80GB GPU is available in the Nvidia DGX A100 and Nvidia DGX Station A100 systems that are expected to ship this quarter.

Computer makers Atos, Dell Technologies, Fujitsu, Gigabyte, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta, and Supermicro will offer four-GPU or eight-GPU systems based on the new A100 80GB GPU in the first half of 2021.

Nvidia’s new chip will compete with the AMD Instinct MI100 GPU accelerator, which Advanced Micro Devices also announced today. In contrast to AMD, Nvidia uses a single GPU architecture for both AI and graphics.

Moor Insights & Strategy analyst Karl Freund said in an email to VentureBeat that the AMD GPU can provide 18% better performance than the original 40GB A100 from Nvidia. But he said real applications may benefit from the 80GB Nvidia version. He also said that while price-sensitive customers may favor AMD, he doesn’t think AMD can take on Nvidia when it comes to AI performance.

“In AI, Nvidia raised the bar yet again, and I do not see any competitors who can clear that hurdle,” Freund said.

For AI training, recommender system models like DLRM have massive embedding tables representing billions of users and products. The A100 80GB delivers up to a 3x speedup, so businesses can quickly retrain these models to deliver highly accurate recommendations.
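To see why the extra capacity matters, here is a back-of-the-envelope sketch in Python; the vocabulary sizes and embedding width are hypothetical, not Nvidia's, but they show how DLRM-style tables can outgrow a 40GB card while still fitting in 80GB.

```python
# Back-of-the-envelope memory math for DLRM-style embedding tables.
# Vocabulary sizes are hypothetical; production tables can be far larger.
tables = {
    "user_id": 100_000_000,  # rows: one embedding per user
    "item_id": 20_000_000,   # rows: one embedding per item
}
emb_dim = 128     # entries per embedding row
bytes_fp32 = 4    # bytes per fp32 entry

weights_gb = sum(rows * emb_dim * bytes_fp32 for rows in tables.values()) / 1e9
print(f"embedding weights alone: {weights_gb:.0f} GB")
# -> ~61 GB: over a 40GB A100's capacity but within the 80GB part,
#    before even counting gradients and optimizer state.
```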

The A100 80GB also enables training of the largest models, such as GPT-2, a natural language processing model with superhuman generative text capability, with all their parameters fitting within a single HGX-powered server.

This eliminates the need for data- or model-parallel architectures, which can be time-consuming to implement and slow to run across multiple nodes, Nvidia said.
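As a rough illustration of that single-server point (our arithmetic, based on a common mixed-precision training rule of thumb rather than any Nvidia figure), doubling per-GPU memory roughly doubles the parameters an eight-GPU HGX server can hold:

```python
# Common mixed-precision training estimate (not an Nvidia figure):
# fp16 weights (2 B) + fp16 grads (2 B) + fp32 master copy (4 B)
# + Adam moments (8 B) ~= 16 bytes per parameter, before activations.
BYTES_PER_PARAM = 16

def max_params_billions(total_mem_gb: float, overhead: float = 0.25) -> float:
    """Largest model that fits, reserving `overhead` for activations,
    buffers, and fragmentation (the 25% reserve is our assumption)."""
    usable_bytes = total_mem_gb * 1e9 * (1 - overhead)
    return usable_bytes / BYTES_PER_PARAM / 1e9

for label, mem_gb in [("8x A100 40GB", 8 * 40), ("8x A100 80GB", 8 * 80)]:
    print(f"{label}: ~{max_params_billions(mem_gb):.0f}B parameters")
# -> ~15B parameters on the 40GB server vs. ~30B on the 80GB server
```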

With its multi-instance GPU (MIG) technology, the A100 can be partitioned into as many as seven GPU instances, each with 10GB of memory. This provides secure hardware isolation and maximizes GPU utilization for a variety of smaller workloads.
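In practice, a job targets one slice through ordinary CUDA device selection. A minimal sketch, assuming a MIG-enabled A100 whose instances have already been created with Nvidia's tools (the UUID below is a placeholder):

```python
import os
import subprocess

# List physical GPUs and any MIG instances carved out of them.
# `nvidia-smi -L` prints one line per device, including MIG device
# UUIDs on MIG-enabled GPUs (requires Nvidia drivers to be installed).
listing = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
print(listing.stdout)

# Pin this process to a single 10GB slice by its UUID so it cannot
# touch the other instances. Placeholder UUID -- copy a real one
# from the listing above.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
# ...then initialize CUDA or your ML framework as usual in this process.
```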

The A100 80GB can also accelerate scientific applications, such as weather forecasting and quantum chemistry. Quantum Espresso, a materials simulation package, nearly doubled its throughput on a single A100 80GB node.

New systems for the GPU

Meanwhile, Nvidia announced the DGX Station A100, the second generation of its AI computing system, which the company calls a datacenter in a box. The system delivers 2.5 petaflops of AI performance from four A100 Tensor Core GPUs, with up to 320GB of total GPU memory.

Nvidia VP Charlie Boyle said in a press briefing that the system provides up to 28 separate GPU instances (seven MIG slices on each of its four GPUs) for running parallel jobs.

“This is like a supercomputer under your desk,” Boyle said.

Customers using the DGX Station platform extend across education, financial services, government, health care, and retail. They include BMW Group, Germany’s DFKI AI research center, Lockheed Martin, NTT Docomo, and the Pacific Northwest National Laboratory. The Nvidia DGX Station A100 and Nvidia DGX A100 640GB systems will be available this quarter.

Mellanox networking

Lastly, Nvidia announced Mellanox 400G InfiniBand networking for exascale AI supercomputers. It is the seventh generation of Mellanox InfiniBand technology, moving data at 400 gigabits per second, compared with the first generation's 10 gigabits per second. Nvidia acquired Mellanox in a $6.9 billion deal announced in 2019.

Infrastructure manufacturers such as Atos, Dell Technologies, Fujitsu, Inspur, Lenovo, and Supermicro plan to integrate the product into their lineups. The InfiniBand tech provides aggregate networking throughput of 1.64 petabits per second, 5 times higher than the previous generation. Mellanox's tech will enable faster networking for everything from supercomputers to self-driving cars, Nvidia senior VP Gilad Shainer said in a press briefing.
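For scale, here is a quick sanity check on that aggregate figure (our arithmetic, and our assumption that it describes a 2,048-port modular switch counted bi-directionally; the article does not break the number down):

```python
# Back-of-the-envelope check (assumption: 1.64 Pb/s refers to a
# 2,048-port modular switch, with each 400 Gb/s port counted in
# both directions -- our reading, not stated in the article).
ports = 2048
gbps_per_port = 400
directions = 2  # bi-directional

total_pbps = ports * gbps_per_port * directions / 1e6  # Gb/s -> Pb/s
print(f"{total_pbps:.2f} Pb/s")  # -> 1.64 Pb/s
```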

Kharya said 90% of the world’s data was created in the last two years.
