Networking in the data center: The modern network is flexible
The demands on data centers are growing enormously. New applications in the fields of artificial intelligence (AI), deep learning and high-performance computing (HPC) are emerging around the world, as companies adopt AI in more and more areas. AI systems, for example, personalize marketing and power customer support through virtual assistants; they also form the basis of recommendation systems and facial recognition. Deep learning applications search for patterns in huge data pools consisting of millions of videos, photos, audio files and documents. In recent years, HPC has brought previously unimagined precision to weather forecasting and to flow simulations in aircraft and vehicle construction.
These new tasks can no longer be mastered with conventional applications running on a single computer system. They have given way to distributed applications that span multiple hardware instances and device classes in the data center. This in turn requires data centers to be organized differently: their infrastructure must now be flexible and adapt to the structure of the applications. CPUs, GPUs specialized for AI, RAM and storage have to be connected in ever new configurations and at different performance levels. At the same time, this means significantly more management effort for those responsible for the data center. The additional work can be kept under control through greater automation and central control of data center resources.
But that is not all: data centers also need high-speed networks to connect the various components that work together in a distributed application. Otherwise, the CPUs and GPUs may offer plenty of computing power, but if the network connections cannot deliver data fast enough, the resulting wait times regularly lead to idle phases. CPUs and GPUs could work far more effectively in this scenario, but they are constantly starved of data.
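To make this bottleneck effect tangible, here is a minimal back-of-the-envelope sketch. The numbers (4 GB of input per step, 0.1 s of GPU compute per step, no overlap of transfer and compute) are purely hypothetical and only serve to illustrate how the share of idle time shrinks as link speed increases:

```python
# Hypothetical illustration: fraction of time a GPU sits idle when it must
# wait for its input data to arrive over the network before each compute step.

def idle_fraction(batch_gb: float, compute_s: float, link_gbit_s: float) -> float:
    """Fraction of each iteration spent idle, assuming `batch_gb` gigabytes
    must arrive over a `link_gbit_s` Gbit/s link before `compute_s` seconds
    of processing can start (transfer and compute do not overlap)."""
    transfer_s = batch_gb * 8 / link_gbit_s  # gigabytes -> gigabits, then divide by link speed
    return transfer_s / (transfer_s + compute_s)

# Example: 4 GB of input per step, 0.1 s of GPU compute per step.
for speed in (10, 100, 400):  # link speed in Gbit/s
    print(f"{speed:>3} Gbit/s link -> {idle_fraction(4, 0.1, speed):.0%} idle")
# Output: roughly 97% idle at 10 Gbit/s, 76% at 100 Gbit/s, 44% at 400 Gbit/s
```

Even in this simplified model, a faster interconnect directly translates into more of the expensive compute hardware's time being spent on useful work.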
This is where Nvidia’s intelligent network hardware comes in to help data centers meet these challenges. It is based on technology from Mellanox, an Israeli-American manufacturer of network components that Nvidia acquired in 2020. Since June 2021, PNY Technologies has been a distributor of Nvidia InfiniBand and Ethernet switches, adapters and cables in the EMEA region.
Thanks to Mellanox’s many years of experience, Nvidia is now one of the world’s leading manufacturers in the InfiniBand sector. InfiniBand technology is aimed primarily at data center operators and cloud providers, but also at companies with particularly high requirements for data transmission between their servers or across the network. Products such as Nvidia’s SB7800, CS7500 and SB7780/7880 series switches achieve transfer rates of up to 400 Gbit/s and are characterized by low latency.
The same applies to Nvidia’s ConnectX series of InfiniBand network adapters, which also reach rates of up to 400 Gbit/s. Using these switches and adapters together ensures that data transmission does not become a bottleneck and that idle phases are avoided.
However, Nvidia does not rely exclusively on InfiniBand technology; alongside it, the company offers its customers Ethernet products optimized for use in the data center. These include the switches of the Mellanox Spectrum SN2000, SN3000 and SN4000 series as well as the models of the Nvidia AS5812 series. In terms of speed, they are not inferior to the InfiniBand switches and likewise forward data through the network at up to 400 Gbit/s.
In conjunction with Mellanox LinkX cables, the InfiniBand and Ethernet switches from Nvidia/Mellanox create an effective data center infrastructure that adapts intelligently to changing applications and requirements and ensures that applications run faster and more effectively, with high performance and low latency.