Emerging technologies are driving the need for faster server speeds in data centres, and the recent developments in artificial intelligence are setting new expectations.
Artificial intelligence, machine learning, and edge computing are seeing growing enterprise adoption, raising demand for faster data centre server systems. Current server speeds of 10 and 25 Gb/s may suffice for mobile applications and high-resolution video content, but newer workloads such as text-based generative AI and machine learning now require speeds of 100 Gb/s and beyond.
Recent advances in AI have significantly raised expectations for network bandwidth and speed. Graphics processing units (GPUs) now enable server systems to analyse and train video models in just weeks. With server connectivity speeds ranging from 200 Gb/s to 400 Gb/s and 800 Gb/s on the horizon, the demands on data centre infrastructure are escalating.
Network engineers face the challenge of balancing cost and performance. Numerous cabling options can facilitate faster switch-to-server connections. Short-reach, high-speed cable assemblies are particularly effective in maximising data centre budgets. These assemblies support high-speed connectivity, better power efficiencies, and lower-latency data transmission, which are essential for emerging applications.
Top-of-rack (ToR) and end-of-row (EoR) designs offer different advantages for data centre deployments. ToR is ideal for rapid deployment and easy data centre expansion, with simplified cable management and troubleshooting; however, it requires managing more switches. EoR, conversely, uses fewer access-layer switches, simplifying system updates, but it increases cabling complexity and space requirements. Choosing the right design depends on the specific needs and constraints of the data centre.
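The switch-count trade-off above can be sketched with some rough arithmetic. This is a minimal illustration, not a sizing tool: the row, rack, and server counts are invented example figures, and real designs must also account for redundancy, uplinks, and oversubscription.

```python
# Sketch: rough switch- and cable-count comparison for ToR vs EoR layouts.
# All parameters (rows, racks per row, servers per rack) are illustrative
# assumptions, not figures from the article.

def tor_counts(rows: int, racks_per_row: int, servers_per_rack: int) -> dict:
    """Top-of-rack: one access switch per rack; server cables stay in-rack."""
    racks = rows * racks_per_row
    return {
        "access_switches": racks,                    # more switches to manage
        "server_cables": racks * servers_per_rack,   # short in-rack runs (DAC-friendly)
    }

def eor_counts(rows: int, racks_per_row: int, servers_per_rack: int) -> dict:
    """End-of-row: one access switch per row; server cables cross the row."""
    return {
        "access_switches": rows,                     # far fewer switches to update
        "server_cables": rows * racks_per_row * servers_per_rack,  # longer horizontal runs
    }

tor = tor_counts(rows=4, racks_per_row=10, servers_per_rack=20)
eor = eor_counts(rows=4, racks_per_row=10, servers_per_rack=20)
print(tor)  # {'access_switches': 40, 'server_cables': 800}
print(eor)  # {'access_switches': 4, 'server_cables': 800}
```

The cable counts match, but ToR keeps each run inside the rack, while EoR trades ten times fewer access switches for longer, bulkier horizontal cabling.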
High-speed server connections come in various forms, such as direct attach copper (DAC) cables, active optical cables (AOCs), and transceiver assemblies, supporting transmission speeds from 10 Gb/s to 800 Gb/s. DACs are suitable for in-rack connections with minimal power consumption, while AOCs support longer reaches at higher speeds. Transceivers, though more expensive, can cover extensive distances and leverage existing cabling infrastructure.
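The selection logic implied above can be expressed as a simple reach-based heuristic. A minimal sketch, assuming rough, commonly cited reach figures (passive DAC to a few metres, AOC to around 100 m); the thresholds are illustrative and actual limits vary by speed and vendor, so always check the datasheet.

```python
# Sketch: a reach-based chooser among DAC, AOC, and transceiver + fibre.
# The distance thresholds are illustrative assumptions, not vendor specs.

def choose_interconnect(distance_m: float) -> str:
    if distance_m <= 3:
        # In-rack link: passive copper draws minimal power and costs least.
        return "DAC"
    if distance_m <= 100:
        # Row-scale link: active optics with fixed ends handle longer reaches.
        return "AOC"
    # Long haul: pluggable transceivers reuse structured fibre already installed.
    return "transceiver + structured fibre"

print(choose_interconnect(2))    # DAC
print(choose_interconnect(30))   # AOC
print(choose_interconnect(500))  # transceiver + structured fibre
```

In practice the decision also weighs power budget, latency, and cost per port, but reach is usually the first filter.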
As server density increases due to the power consumption of new GPUs, advanced cooling methods are becoming necessary. Technologies like Nvidia’s Blackwell GPUs are pushing the envelope of what’s possible in rack-scale design. With AI GPUs and server designs becoming denser, ToR rack-scale server systems meet current needs while preparing for future demands. However, EoR systems are still deployed where power and infrastructure limitations exist, and hybrid cloud approaches help balance resources efficiently.
Selecting the appropriate cable assemblies for next-generation network topologies is crucial. An agile deployment model that utilises point-to-point high-speed cable assemblies can significantly benefit network managers. Looking ahead, staying nimble and making informed choices, such as working with trusted cable assembly manufacturers, will be vital for future-proofing data centres.
In an era of rapid technological advancement, data centres must adapt to increasing demands for faster server speeds. By carefully selecting the right cabling and deployment strategies, businesses can ensure they meet current and future needs efficiently.