Cray powers deep learning

At the 2016 Supercomputing Conference in Salt Lake City, Utah, Cray unveiled new deep learning capabilities across its line of supercomputing and cluster systems. With validated deep learning toolkits and the most scalable supercomputing systems in the industry, Cray customers can now run deep learning workloads at their full potential – at scale on a Cray supercomputer.

“The convergence of supercomputing and big data analytics is happening now, and the rise of deep learning algorithms is evidence of how customers are increasingly using high performance computing techniques to accelerate analytics applications,” said Steve Scott, senior vice president and chief technology officer at Cray. “Training problems look very much like classical supercomputing problems. We believe that with our Cray Programming Environment, validated toolkits, and the latest processing technologies, we have the right combination of hardware and software expertise to help our customers efficiently execute deep learning workloads now and in the future.”
 
Cray has validated and made available several deep learning toolkits on Cray® XC™ and Cray CS-Storm™ systems to simplify the transition to running deep learning workloads at scale. These toolkits include the Microsoft Cognitive Toolkit (previously CNTK), TensorFlow™, NVIDIA® DIGITS™ (Deep Learning GPU Training System), Caffe, Torch, and MXNet.
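For a sense of what such a workload looks like at the code level, here is a minimal, purely illustrative TensorFlow training sketch; the model shape and the synthetic data are placeholders of our own, not part of Cray's validated configurations:

```python
# Minimal, illustrative TensorFlow training job of the kind these toolkits run.
# The data and model here are synthetic stand-ins, not a Cray workload.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples, 32 features, 10 classes.
x = np.random.rand(1000, 32).astype("float32")
y = np.random.randint(0, 10, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, batch_size=128, epochs=3)
```

Scaling this same training loop across many nodes is precisely where the supercomputing interconnect and programming environment come into play.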
 
Additionally, the Cray CS-Storm system – a dense, GPU-accelerated cluster supercomputer that offers 850 GPU teraflops in a single rack – now supports the PCIe version of the NVIDIA Tesla® P100 data center accelerator and the NVIDIA Tesla M40 deep learning training accelerator. And with the addition of the NVIDIA Tesla P100 to the Cray XC50™ supercomputer, Cray now offers a range of scalable systems well suited to a wide array of emerging deep learning and machine learning applications.
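As a rough back-of-envelope reading of that rack figure (the per-GPU numbers below are assumed single-precision peak throughputs, not values from the announcement):

```python
# Back-of-envelope check of the "850 GPU teraflops per rack" figure.
# Assumed FP32 peak throughputs (our assumptions, not from the article):
#   Tesla P100 PCIe ~9.3 TF, Tesla M40 ~7.0 TF.
rack_tflops = 850
for gpu, tflops in [("Tesla P100 PCIe", 9.3), ("Tesla M40", 7.0)]:
    print(f"{gpu}: ~{rack_tflops / tflops:.0f} GPUs per rack")
```

Under those assumptions, the figure implies on the order of 90 to 120 GPUs packed into a single rack.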
 
PGS, a leading marine geophysical company, is running machine learning algorithms on its Cray XC40™ supercomputer, nicknamed “Abel.” Machine learning techniques such as regularization and steering are being applied to one of the most demanding computational problems in seismic exploration: Full Waveform Inversion (FWI), a methodology that seeks a high-resolution, high-fidelity representation of the subsurface in the ultra-deep Gulf of Mexico.
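To illustrate the regularization idea in miniature, the toy sketch below applies a Tikhonov-regularized gradient update to a synthetic ill-posed linear inverse problem; the forward operator G and the data d are made up for illustration and bear no relation to PGS's actual FWI workflow:

```python
# Toy sketch of a regularized inversion update, the flavor of technique the
# article describes. G and d are synthetic; real FWI uses a wave-equation
# forward model, which is what makes it a supercomputing-scale problem.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((50, 100))       # ill-posed: more unknowns than data
m_true = np.zeros(100)
m_true[10:20] = 1.0                      # a simple "subsurface" anomaly
d = G @ m_true + 0.01 * rng.standard_normal(50)

lam = 0.1            # Tikhonov regularization weight
m = np.zeros(100)    # model estimate
step = 1e-3
for _ in range(500):
    # Gradient of ||G m - d||^2 + lam * ||m||^2
    grad = 2 * G.T @ (G @ m - d) + 2 * lam * m
    m -= step * grad

print("data residual:", np.linalg.norm(G @ m - d))
```

The regularization term is what keeps an ill-posed problem like this from fitting noise; in FWI, the analogous machinery helps steer the inversion toward geologically plausible models.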
 
“This class of problems is notoriously hard,” said Dr. Sverre Brandsberg-Dahl, global chief geophysicist for Imaging and Engineering at PGS. “It is a multidimensional, ill-posed optimization problem that is far from automated and requires significant intervention by skilled experts – sometimes more art than science. Our Cray XC40 system was able to learn how to best steer refracted and diving waves for deep model updates, and how best to reproduce the sharp salt boundaries in the Gulf of Mexico. Machine learning at scale on our Cray supercomputer showed dramatic improvement in the quality of the inversion process compared to current state-of-the-art FWI.”