
The Hidden Costs of GPU Downtime: Why Proactively Monitoring Your Ethernet Fabric is Essential for Training Large Language Models

In today’s digital era, training large language models across a multitude of GPUs in a distributed manner has become commonplace. Yet the importance of properly monitoring the Ethernet Data Center fabric that connects them is often overlooked, and the implications of this oversight can be costly. In this blog post, we will delve into the financial repercussions of downtime, explore scenarios that can delay model training, and ultimately highlight the undeniable cost benefits of proactive infrastructure monitoring.

The Expensive Nature of GPU Downtime

  • Understanding GPU Costs:

Rental model: The market rental rate for a single high-end GPU (NVIDIA H100) hovers around $5/hour. Large language model training often requires hundreds or thousands of GPUs working simultaneously, so the aggregate cost can skyrocket to tens of thousands of dollars per hour, and to roughly $100K/hour or more for the largest clusters.

Acquisition model: Some organizations take the CAPEX route by acquiring and building their own GPU cluster. An NVIDIA DGX H100 system (8x GPUs) costs somewhere around $300K, or roughly $37.5K per GPU. If we assume the assets depreciate over 3 years, this results in an “hourly amortization” of about $1.43 per running GPU. Historically, running Data Center infrastructure has resulted in an Opex-to-Capex ratio between 1 and 3; let’s take 2 for this exercise, which puts the Opex at around $2.86 per hour per running GPU. The TCO per hour in the acquisition model is therefore around $4.30.

For the purposes of evaluating the impact of downtime, we will assume both scenarios have a similar total cost per GPU, around $5/hour. The arithmetic behind both models is sketched right after this list.

  • Downtime’s Direct Financial Impact: At this scale, even a small fraction of an hour of downtime can translate into losses amounting to thousands of dollars.
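
To make these figures concrete, here is a minimal Python sketch of the arithmetic above. The $5/hour rental rate, $300K system price, 3-year depreciation and Opex-to-Capex ratio of 2 are the assumptions stated in this post, not measured figures.

```python
# Back-of-the-envelope GPU cost model, using the assumptions from this post
# (~$5/hour H100 rental, ~$300K DGX H100, 3-year depreciation, Opex/Capex = 2).

HOURS_PER_YEAR = 365 * 24  # 8,760 hours


def rental_cost_per_gpu_hour(hourly_rate: float = 5.0) -> float:
    """Rental model: market rental rate for a high-end GPU such as an NVIDIA H100."""
    return hourly_rate


def acquisition_cost_per_gpu_hour(
    system_price: float = 300_000.0,  # NVIDIA DGX H100 (8 GPUs), approximate
    gpus_per_system: int = 8,
    depreciation_years: float = 3.0,
    opex_to_capex_ratio: float = 2.0,  # historically between 1 and 3
) -> float:
    """Acquisition (CAPEX) model: hourly amortization plus proportional Opex."""
    capex_per_gpu_hour = system_price / gpus_per_system / (depreciation_years * HOURS_PER_YEAR)
    opex_per_gpu_hour = capex_per_gpu_hour * opex_to_capex_ratio
    return capex_per_gpu_hour + opex_per_gpu_hour  # ~ $1.43 + $2.86 = $4.28/hour


if __name__ == "__main__":
    print(f"Rental model:      ${rental_cost_per_gpu_hour():.2f} per GPU-hour")
    print(f"Acquisition model: ${acquisition_cost_per_gpu_hour():.2f} per GPU-hour")
```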

How Downtime Impedes the Training Process

  • Network Discontinuity: Disruptions in network continuity mean a halt in training, sometimes requiring the process to revert to a previous checkpoint. If a failure is not detected on time, the cost covers two intervals: the time between the failure occurring and the issue being detected and fixed, plus the time elapsed since the last model checkpoint was saved, which represents the last known good state from which training can resume once network continuity is restored. Together these intervals can span hours or days, which at the hourly rates above translates into hundreds of thousands to millions of dollars (a rough estimate is sketched after this list).
  • Packet Loss & Data Corruption: These issues can cause models to train on incomplete or incorrect data, compromising the quality and accuracy of the output. Packet loss also leads to retransmissions, which lengthen the time needed to transfer model training state from GPU to GPU and extend the total training time. If this goes unnoticed, it can add extra hours or days to the training of a large model, again adding hundreds of thousands of dollars to the training bill. It is important to realize that overall system performance is governed by the longest-lasting flows rather than by the fastest ones; it is the long tail that determines system performance.
  • Congestion: Network congestion slows down the data flow, resulting in extended training durations and consequently higher costs. While congestion does not necessarily result in packet loss, it will certainly slow the process through Priority Flow Control (PFC) and Explicit Congestion Notification (ECN) signals, which cause the data sources (GPUs) to lower their transmission rates (primarily via transmission window reduction and packet queuing) to prevent congestion from building up further. The overall impact is longer-lasting flows and longer training times, and therefore additional cost.
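
As a rough illustration of the checkpoint argument above, the sketch below estimates the cost of a single network failure that goes undetected for a while. The cluster size, the $5/GPU-hour rate, the checkpoint interval and the detection/repair time are hypothetical inputs chosen purely for the example.

```python
def downtime_cost(num_gpus: int,
                  cost_per_gpu_hour: float,
                  hours_since_last_checkpoint: float,
                  hours_to_detect_and_fix: float) -> float:
    """Rough cost of one failure: GPUs keep accruing cost while no retained progress is made."""
    lost_hours = hours_since_last_checkpoint + hours_to_detect_and_fix
    return num_gpus * cost_per_gpu_hour * lost_hours


# Hypothetical scenario: 4,096 GPUs at $5/hour, last checkpoint saved 2 hours
# before the failure, and 4 hours to detect and repair the fabric issue.
print(f"Estimated loss: ${downtime_cost(4096, 5.0, 2.0, 4.0):,.0f}")  # -> $122,880
```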

The Domino Effect of Delays

  • Extended Training Time: Delays, even if minor, cumulatively push out the training timeline. For large models where training spans days or weeks, delays can translate into hefty unexpected costs.
  • Fail fast, retrain: Model training is a highly experimental process, and the results may not be as expected, so models often need to be trained multiple times. That makes it important to optimize training time so that, when a run must be repeated, the total time invested is minimized. Any interference that delays training will have a financial impact on the overall ROI of the model.
  • Missed Opportunities: Time-sensitive projects or models meant to address immediate needs can lose their value if not trained and deployed promptly.

Proactive Monitoring: An Investment That Pays Off

  • Long-tail latency anomaly detection: Modern Network AIOps tools like Augtera can detect and alert on latency anomalies in real time, preventing the cascading effect of small issues turning into major disruptions.
  • Swift Response: With immediate alerts, teams can act quickly, ensuring that issues are resolved before they can impact the training process.
  • Insights and Troubleshooting: AIOps tools offer deep insights into the network’s health, allowing teams to make informed decisions and implement preventive measures. For instance, flow completion time analytics can help detect the outliers that drive up job completion times (a toy illustration follows this list). Sometimes this surfaces software or hardware issues that are beginning to cause traffic polarization and might otherwise go unnoticed.
  • ROI of Monitoring: The upfront investment in robust observability and AIOps tools is dwarfed by the potential losses incurred due to unscheduled downtime.
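
To illustrate what catching the long tail means in practice, here is a toy sketch that flags flow completion time outliers by comparing each flow against the median. The 3x threshold and the synthetic samples are purely illustrative assumptions, not Augtera’s actual detection algorithm.

```python
import statistics


def tail_outliers(flow_completion_times_ms, tail_ratio_threshold=3.0):
    """Flag flows whose completion time far exceeds the median of their peers.

    Collective operations finish only when the slowest flow does, so a long
    tail here translates directly into longer training iterations.
    """
    median = statistics.median(flow_completion_times_ms)
    return [t for t in flow_completion_times_ms if t > tail_ratio_threshold * median]


# Illustrative samples (milliseconds): most flows behave, a couple of stragglers do not.
samples = [12.1, 11.8, 12.4, 12.0, 11.9, 48.7, 12.2, 51.3]
print(tail_outliers(samples))  # -> [48.7, 51.3]
```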

The Bottom Line: Prevention Over Cure

It’s clear that in the world of training large language models, time is money. The chain reaction of downtime not only inflates costs but also extends deliverable timelines and compromises model quality. Proactively monitoring your Ethernet Data Center fabric isn’t just a best practice—it’s a business imperative. By doing so, businesses can ensure optimal performance, timely deliverables, and most importantly, substantial cost savings in the long run.

In Conclusion

With the increasing reliance on large language models and the growing importance of AI in shaping our digital future, businesses cannot afford to overlook the importance of proper monitoring. Embracing a proactive approach enabled by Augtera’s AI is the first step towards efficiency, cost savings, and ensuring that the vast potential of AI is fully realized without unnecessary hindrances.