
Qlustar MPI

To achieve scalable compute performance on an HPC cluster, most applications use MPI parallelization. Qlustar MPI provides optimized Debian packages of OpenMPI, the most widely used open-source MPI implementation.

Our packages are built so that different major OpenMPI versions, compiled with different compilers (gcc, icc, etc.), can be installed in parallel. Users can therefore choose which MPI version to use for their programs, and they have a fall-back in case an updated version does not work for them at first. Because of the way the packages are built, applications compiled with Qlustar OpenMPI always resolve all system and MPI library dependencies automatically and correctly, without any environment variables being set. This ensures that old binaries continue to run even after system updates, as long as the packages used to build them remain installed.
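As a sketch of what this parallel installation means in practice, the session below shows how a specific OpenMPI version/compiler combination might be selected by calling its own compiler wrapper directly. The wrapper path shown is hypothetical; the actual locations depend on the Qlustar OpenMPI packages installed on your cluster.

```shell
# Hypothetical path: the exact wrapper location depends on the
# installed Qlustar OpenMPI packages.
# Compile against one specific OpenMPI version/compiler combination
# by invoking that package's own mpicc wrapper:
/usr/lib/openmpi/1.10-gcc/bin/mpicc -o hello hello.c

# The resulting binary resolves its MPI libraries through the
# package's embedded runtime paths, so no LD_LIBRARY_PATH or other
# environment variables are needed to run it:
./hello
```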

Qlustar MPI Features

The MPI variants provided by Qlustar are optimally integrated with other HPC components like workload managers, GPU toolkits and checkpointing software. In particular, they feature:

  • Integration with the Slurm workload manager.
  • Integration with the Torque workload manager.
  • Integration with Nvidia CUDA to support GPU computing.
  • Full-featured support for Infiniband networks.
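To illustrate the Slurm integration, here is a minimal, hypothetical batch script: with Slurm-aware OpenMPI, srun launches the MPI ranks directly, so no separate mpirun hostfile is required. The job name, resource counts, and binary name are placeholders.

```shell
#!/bin/bash
# Illustrative Slurm batch script; job name, node/task counts and the
# application binary (./my_mpi_app) are placeholders.
#SBATCH --job-name=mpi-example
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

# With Slurm-integrated OpenMPI, srun starts the MPI ranks directly
# on the allocated nodes; no hostfile or mpirun invocation is needed.
srun ./my_mpi_app
```

Submit with `sbatch script.sh`; Slurm then allocates the nodes and starts the MPI processes on them.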

Qlustar MPI Performance

  • Due to its ready-to-use support for modern IB network technology, Qlustar MPI achieves the highest throughput and lowest message latency that the cluster's IB hardware allows.
  • The compiled-in OpenMPI CUDA-aware feature allows CUDA GPU buffers to be sent and received directly, without staging them through host memory. This yields significant performance improvements for applications that combine GPU computing with distributed MPI.
  • Starting from Qlustar OpenMPI 1.10.x, GPUDirect support is enabled on Mellanox IB cards. This allows the IB adapter to read and write CUDA host and device memory directly, eliminating unnecessary memory copies, dramatically lowering CPU overhead and reducing latency.
  • Mellanox HPC-X performance libraries (Fabric Collectives and Messaging Accelerator) are fully integrated into Qlustar OpenMPI starting from version 1.10.x. This yields substantial performance gains for applications with certain communication patterns, such as the OpenFOAM CFD solvers. Thanks to a partnership with Mellanox Technologies, Qlustar is the only Linux distribution with out-of-the-box support for HPC-X technology.

CUDA and GPUDirect are registered trademarks of NVIDIA Corporation in the U.S. and/or other countries.
HPC-X is a registered trademark of Mellanox Technologies.
