HPC computing clusters
High-performance computing
Clusters for ANSYS, render farms

HPC clusters (High-Performance Computing clusters) are used for computationally intensive work, in particular in scientific research and engineering computation. A computing cluster is an array of servers (computing nodes, or simply nodes) united by a communication network and housed in a separate rack. Each computing node has several multi-core processors, its own RAM, and runs its own operating system. Homogeneous clusters, in which all nodes are identical in architecture and performance, are the most common.

An important stage is cluster design, where the technical requirements define the cluster's characteristics (performance, efficiency, scalability, etc.). Based on these requirements and additional constraints (such as the project budget), the hardware parameters are calculated and selected: the computing node parameters (bit depth, number of processors, memory size, cache size, etc.), the number of computing nodes, the characteristics of the communication equipment, and the parameters of the control node and network. Our company offers turnkey design and supply of cluster computing systems for various purposes and performance levels.
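For illustration, here is a minimal sizing sketch in Python; the per-node figures, prices, and targets are hypothetical placeholders, not recommendations, and real projects are sized against measured application benchmarks.

# Illustrative cluster sizing sketch; all figures are hypothetical.
cores_per_node = 64           # e.g. two 32-core CPUs
gflops_per_core = 50.0        # sustained, application-dependent
node_price = 12_000.0         # USD, including interconnect adapter

target_tflops = 100.0         # required sustained performance
budget = 600_000.0            # project budget, USD

node_tflops = cores_per_node * gflops_per_core / 1000.0
nodes_needed = int(-(-target_tflops // node_tflops))  # ceiling division
cost = nodes_needed * node_price

print(f"Nodes needed: {nodes_needed}, estimated cost: ${cost:,.0f}")
if cost > budget:
    print("Over budget: relax the target or change the node spec.")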


Typical tasks for computing clusters:

  • solving problems of fluid and gas mechanics, heat transfer and heat exchange, electrodynamics, acoustics;
  • modeling of aerogasdynamic processes;
  • modeling of physical and chemical processes and reactions;
  • modeling of complex dynamic behavior of various mechanical systems;
  • solving problems of digital signal processing, financial analysis, various mathematical problems, visualization and presentation of data;
  • analysis and calculation of static and dynamic strength;
  • gas dynamics, thermodynamics, thermal conductivity and radio frequency analysis;
  • modeling of problems of any degree of geometric complexity;
  • rendering of animation and photorealistic VFX effects.

In some industries, computational modeling and analysis performed on an HPC cluster make it possible to avoid expensive and lengthy "design-build-test" development cycles.



Computing cluster: main components

The basis of a computing cluster's infrastructure is a pool of servers, called computing nodes, united by a single communication network.


Cluster Computing Node

A computing node is a multi-processor, multi-core computer on which user tasks are executed. It is the foundation of a computing cluster; its choice determines overall performance and the possibility of further expansion. The performance of a computing node is determined by the clock rate, generation, and number of cores of the processors used; at the same time, the core count is not always the priority. ECC (error-correcting) RAM is recommended for any software, at a rate of 4-8 GB per processor core. A user task can occupy one computing node, several computing nodes, or all of them. Simultaneous execution of several tasks on one processor core is not allowed: cluster computing resources are divided between tasks at the granularity of a processor core.
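As a minimal sketch of the memory rule above, in Python (the node configuration is hypothetical):

# ECC RAM sizing: 4-8 GB per processor core, as recommended above.
cores_per_node = 2 * 32        # two 32-core CPUs (example)
ram_min_gb = cores_per_node * 4
ram_max_gb = cores_per_node * 8
print(f"Recommended ECC RAM per node: {ram_min_gb}-{ram_max_gb} GB")
# -> Recommended ECC RAM per node: 256-512 GB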



Control node 

The control node is a high-performance server that can combine several functions: scheduler, front end, monitoring, and so on. A scheduler coordinates the tasks executed on different nodes: it identifies available resources, assigns and distributes tasks, and monitors their execution status. Monitoring and controlling the components of the hardware and software complex is critical for organizing high-performance distributed computing: both the user and the cluster administrator need to know how a task is progressing and what load it places on the computing system as a whole. When designing a cluster, keep in mind that the head node, which is the control node, actively participates in calculations for a number of tasks. It must be equipped with the same interconnects as the computing nodes, since it is the first compute resource in the task queue scheduler's list. Using high-speed SSDs, or arrays built on them, can reduce calculation time several times over. The head node should have 2-3 times more RAM than the other nodes of the cluster.
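As an illustration of how a scheduler allocates cluster resources, here is a minimal Python sketch that composes and submits a batch job, assuming Slurm as the workload manager (PBS, LSF, and others use equivalent concepts); the resource figures and the solver command are placeholders.

import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=cfd_case
#SBATCH --nodes=4              # number of computing nodes
#SBATCH --ntasks-per-node=32   # one MPI rank per core
#SBATCH --mem=256G             # RAM per node
#SBATCH --time=24:00:00        # wall-clock limit

srun ./solver input.dat        # placeholder solver binary
"""

with open("job.sbatch", "w") as f:
    f.write(job_script)

# Hand the job to the scheduler: it finds free nodes, launches the
# task, and tracks its state in the queue.
subprocess.run(["sbatch", "job.sbatch"], check=True)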



GPU Compute Node

GPU-accelerated computing offers substantial application speedups: the GPU handles the compute-intensive parts of the application while the rest runs on the CPU. From the user's perspective, the application simply runs significantly faster. A simple way to understand the difference between a CPU and a GPU is to compare how they perform tasks: a CPU consists of a few cores optimized for sequential processing of data, while a GPU (such as NVIDIA TESLA) consists of thousands of smaller, more power-efficient cores designed to handle many tasks simultaneously. The full range of GPU servers is available in the High Performance Computing section.
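A minimal sketch of this CPU/GPU split in Python, assuming an NVIDIA GPU with the CuPy library installed: the heavy kernel (a large matrix multiplication) runs on the GPU, while data preparation and post-processing stay on the CPU.

import numpy as np
import cupy as cp   # assumes an NVIDIA GPU with CuPy installed

# CPU part: prepare the data on the host
a = np.random.rand(4096, 4096).astype(np.float32)
b = np.random.rand(4096, 4096).astype(np.float32)

# GPU part: copy in and run the compute-intensive kernel on
# thousands of GPU cores
a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
c_gpu = cp.matmul(a_gpu, b_gpu)

# CPU part: copy the result back and continue on the host
c = cp.asnumpy(c_gpu)
print("Result checksum:", float(c.sum()))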



Interconnect 

Mellanox switching solutions improve data-center efficiency through high throughput and low latency, delivering data to applications faster and letting systems realize more of their performance potential. This type of interconnect not only provides high performance figures but also offloads network processing entirely to hardware: even intensive data exchange over the interconnect does not consume CPU cycles on the computing servers.
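As a rough illustration, the Python sketch below probes point-to-point bandwidth between two ranks, assuming mpi4py on top of an MPI library built with InfiniBand support; run it with, for example, mpirun -np 2 python bandwidth.py.

import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.empty(64 * 1024 * 1024, dtype=np.uint8)  # 64 MB message

comm.Barrier()
start = time.perf_counter()
if rank == 0:
    comm.Send(buf, dest=1)    # uppercase Send/Recv work on raw buffers
elif rank == 1:
    comm.Recv(buf, source=0)
comm.Barrier()

if rank == 0:
    elapsed = time.perf_counter() - start
    print(f"~{buf.nbytes / elapsed / 1e9:.2f} GB/s one-way")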



Data storage system

All calculation results, as well as the intermediate data produced during calculations, must be stored somewhere, so an effective, reliable, and high-performance data storage system is essential. The storage system can be combined with the control server or connected as an external device over a fast interconnect such as Fibre Channel or SAS. Storage capacity can be increased as needed using SSDs or conventional SATA drives.
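A crude sequential-write throughput check in Python is sketched below; the mount point is a placeholder, and dedicated tools such as fio or IOR are far more thorough.

import os, time

path = "/scratch/throughput_test.bin"   # hypothetical mount point
block = b"\0" * (16 * 1024 * 1024)      # 16 MB blocks
blocks = 64                             # 1 GB in total

start = time.perf_counter()
with open(path, "wb") as f:
    for _ in range(blocks):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                # flush to the storage device
elapsed = time.perf_counter() - start

print(f"~{blocks * len(block) / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(path)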



UPS and redundancy

An uninterruptible power supply is a mandatory part of building a cluster: important data can be lost as a result of problems in the power supply chain. UPSs ship with software that will automatically and safely shut down the HPC cluster even if no one is nearby when an unplanned power outage occurs. Power redundancy is no less important: computing nodes and switches support the installation of additional power supplies. We do not recommend saving money at the expense of stable and safe operation.
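As a sketch of the automatic-shutdown idea, the Python loop below polls a UPS through the upsc utility from NUT (Network UPS Tools) and schedules a shutdown when the UPS reports it is on battery; the UPS name is a placeholder, and vendor tools such as APC PowerChute or NUT's upsmon normally handle this out of the box.

import subprocess, time

UPS = "ups@localhost"   # hypothetical NUT UPS identifier

def on_battery() -> bool:
    out = subprocess.run(["upsc", UPS, "ups.status"],
                         capture_output=True, text=True, check=True)
    return "OB" in out.stdout.split()   # "OB" = on battery

while True:
    if on_battery():
        # Placeholder for a graceful drain-and-poweroff sequence
        subprocess.run(["sudo", "shutdown", "-h", "+5"], check=True)
        break
    time.sleep(30)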


