Weather is changing at a rapid rate, from record-breaking heatwaves to extreme floods and freezing conditions. Climate change is impacting homes, businesses and people across the world, and it will also affect the way we design, build and operate our data centres. So let's look at the risks climate change poses to data centres in more detail and, most importantly, at what we as an industry should be doing to overcome this growing threat.
To put this into context, the world is increasingly reliant on data centres for critical operations. Whether it's powering the internet, processing financial transactions or running the latest blockbuster game, data centres are essential to many aspects of our lives.
So when, on the 10th July, Amazon in London declared that a 'thermal event' had caused a power outage, and the following week cloud services and servers hosted by Google and Oracle in the UK dropped offline due to cooling issues, we should all have taken note. There are also rumours that some NHS Trusts were hit and, two weeks later, were still operating on paper-based systems while trying to get servers back up and running after a total meltdown.
Desperate times call for desperate measures, and you may have read about London users on rooftops spraying cooling systems with hoses to help keep temperatures under control. Whilst this may work in the short term, it is clearly not a long-term solution and may well shorten the life of the equipment. So, what can we do?
Our industry has been aware of the risk of rising temperatures for some time, but with global temperatures now clearly on the rise, the issue is becoming more pressing. However, it is worth remembering that many data centres have been designed with a level of inbuilt redundancy and resilience, and many of them operate below their designed capacity. So, from a power and cooling perspective, there is some comfort: a proportion of the heightened conditions can often be absorbed, as even a drop in performance will still satisfy the required demand.
Designers have typically used N=20, sizing for the most extreme weather conditions recorded in the local area over the last 20 years. But with such big jumps in the maximum, from 38.7 to 40.3 degrees Celsius in the UK in only three years, perhaps this needs to be reconsidered. Easily said, but with pressure to meet net-zero sustainability goals, simply upping the cooling spec is unlikely to be sustainable. We must develop smarter ways of dealing with extreme, but short-lived, weather conditions.
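To make the N=20 idea concrete, here is a minimal sketch of how the choice of look-back window and safety margin moves the design temperature a cooling system is sized against. The temperature series is illustrative only, not real Met Office data, and the helper name is our own invention.

```python
# Hypothetical sketch: how the "N" in an N=20 design condition, plus an
# optional safety margin, changes the design temperature. The annual
# maxima below are illustrative, not real Met Office records.

def design_temp(annual_maxima, n_years, margin=0.0):
    """Design temperature = hottest annual maximum within the last
    n_years of data, plus an optional safety margin in degrees C."""
    recent = annual_maxima[-n_years:]
    return max(recent) + margin

# Illustrative annual maximum temperatures (degrees C), oldest first.
annual_maxima = [34.1, 33.5, 35.0, 34.6, 36.7, 35.2, 38.7, 34.9, 36.4, 40.3]

print(design_temp(annual_maxima, n_years=5))            # window includes the 40.3 spike
print(design_temp(annual_maxima, n_years=5, margin=2))  # same window with a 2 C margin
```

The point of the sketch is that a fixed window silently lags a warming trend: a new record raises the design point immediately, but an old record ages out of the window just as quietly.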
For example, owners and operators can look at developing strategies around short-term load reduction, turning off specific hardware to reduce the overall strain on the cooling systems. It was recently reported that Oracle and Google disconnected 'non-essential' systems to reduce the overall load and successfully maintain supply, although some users did lose access to services in the short term. With planning and foresight, this is a strategy that could help in the short to medium term with limited impact on key users, especially with early warnings and appropriate expectation management.
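A load-reduction plan of this kind can be sketched as a simple priority-ordered shed list: drop the least critical hardware first until the IT load fits within the derated cooling capacity, and record what was shed so affected users can be warned. The system names, priorities and kW figures below are illustrative assumptions, not any real facility's plan.

```python
# Hypothetical short-term load-reduction sketch: shed lowest-priority
# hardware first until the IT load fits the derated cooling capacity.
# All names, priorities and kW figures are illustrative only.

def shed_load(systems, cooling_capacity_kw):
    """systems: list of (name, priority, load_kw); lower priority = shed first.
    Returns (kept, shed) so operators can notify affected users."""
    total = sum(load for _, _, load in systems)
    kept, shed = list(systems), []
    # Walk from least to most critical, shedding until the load fits.
    for sys in sorted(systems, key=lambda s: s[1]):
        if total <= cooling_capacity_kw:
            break
        kept.remove(sys)
        shed.append(sys)
        total -= sys[2]
    return kept, shed

systems = [
    ("payments-db", 3, 120),    # most critical, shed last
    ("batch-analytics", 1, 80),
    ("dev-sandbox", 1, 40),
    ("web-frontend", 2, 60),
]
kept, shed = shed_load(systems, cooling_capacity_kw=200)
print([name for name, _, _ in shed])  # → ['batch-analytics', 'dev-sandbox']
```

The value of writing the plan down in advance, rather than improvising during an event, is that the priority ordering and the expectation management it implies can be agreed with users before the heatwave arrives.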
Having a robust BMS that provides historical data and analytics is key to providing the required intelligence, enabling data-led decisions on what changes can be made and when to make them, and helping to manage services successfully through extreme conditions. In these circumstances, at Keysource we find our remote monitoring tools invaluable for our customers, as we can monitor live conditions and use this information to make preventative decisions based on real-time operational data, well before it has an impact on service and uptime. With DCIM platforms like EcoStruxure, there is also the ability to link the IT with the live data to automate the strategy.
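The kind of early-warning logic described above can be illustrated with a simple trend extrapolation: fit a straight line to recent supply-air readings and estimate how long until a limit is breached, giving operators time to act before the alarm sounds. The thresholds, interval and readings below are our own illustrative assumptions, not Keysource or EcoStruxure values.

```python
# Hypothetical early-warning sketch: fit a linear trend to recent
# supply-air temperatures and estimate minutes until a limit is crossed.
# Thresholds and readings are illustrative assumptions only.

def minutes_until_limit(readings, limit_c, interval_min=5):
    """readings: recent temperatures (C), one per interval_min minutes.
    Returns estimated minutes until limit_c is reached, or None if the
    trend is flat or falling."""
    n = len(readings)
    mean_x, mean_y = (n - 1) / 2, sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den  # degrees C per interval (least-squares fit)
    if slope <= 0:
        return None
    intervals = (limit_c - readings[-1]) / slope
    return max(0.0, intervals * interval_min)

# Supply air creeping up 0.5 C per 5-minute reading towards a 27 C limit.
print(minutes_until_limit([24.0, 24.5, 25.0, 25.5], limit_c=27.0))  # → 15.0
```

In practice a DCIM platform would feed a forecast like this into the pre-agreed load-reduction plan automatically, but even a manual version gives the data-led lead time the paragraph above describes.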
Longer term, it is not only cooling that is at risk; the power and water that enable cooling also need strategies for the future. Recently, we saw the highest recorded cost of energy imported from Belgium, some 5,000% higher than the normal price. This should raise the question of how we power our data centres, where we locate them, and whether a strategy of leveraging resilience and reliance across geographically diverse facilities could reduce embodied carbon, cut inefficiencies and overspecification and, importantly, reduce the risk of future outages.
Prolonged periods of extreme weather could also lead to droughts, with Met Office figures showing the driest July for 111 years, and some data centre designs rely heavily on water, increasing pressure on local communities and water supply companies. For example, one Google data centre is said to have a guaranteed supply of one million gallons of water per day for cooling!
So we need to consider whether, in the future, we will see more moratoria on data centres based on the availability of water, and whether drought conditions could lead to outages at some existing facilities. This is already being seen in some states in the US.
Whilst some data centres are partially prepared, it is clear that the impact of hotter weather can be significant, from increased cooling costs to damage to expensive infrastructure, so short-, medium- and longer-term strategies are required.
At Keysource we are helping customers by proactively supporting the development of strategies to enact during extreme conditions. This includes putting in place early warning systems, tools, services and suggestions on how to reduce the risk to existing facilities as these conditions become more frequent. We undertake best-practice assessments and health checks on facilities with the aim of reducing the risk to ageing infrastructure, adapting to future demands and improving energy efficiency.
Longer term, the new data centre designs we are involved in are already considering the implications of continued climate change and taking a more practical approach, which we suspect will feature in future standards updates. It's all change, but can you stand the heat?