Thermal Management Requirements in Edge Computing Environments

3 April 2018


As the growth of the Internet of Things (IoT) and similar distributed, data-driven technologies continues to gather pace, edge computing is set to become an integral part of information management and the wider infrastructure of the connected world. The reason is simple: it enables data to be processed close to where it is created, instead of being sent over long routes to centralized data centers or cloud services.

The growing interest in edge computing is driven primarily by the demand to analyze important data as close to real time as possible across an increasing variety of applications. This requires significantly lower latency than typical centralized data center and cloud architectures can achieve, where the processing power sits many miles away from the user.
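To put rough numbers on the distance argument, the sketch below estimates the minimum round-trip propagation delay over optical fiber. The ~200 km per millisecond figure is the standard approximation for light in fiber (about two-thirds of its speed in a vacuum), and the distances chosen are purely illustrative; real-world latency is higher still once routing, queuing and processing are added.

```python
# Back-of-envelope propagation delay: a signal in optical fiber travels at
# roughly 200 km per millisecond. Real latency adds routing and processing.

FIBER_SPEED_KM_PER_MS = 200.0  # approximate signal speed in fiber

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over a fiber path, in ms."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Illustrative distances: on-site edge node vs. regional vs. distant data center
for km in (1, 50, 500, 2000):
    print(f"{km:>5} km one-way -> {round_trip_ms(km):6.2f} ms round trip (minimum)")
```

Even this best-case figure shows why distance matters: a 2,000 km path costs at least 20 ms per round trip before any processing happens, while an edge node a kilometer away costs effectively nothing.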

Moving computing power to the edge of the network overcomes this issue and enables far more responsive data applications – whether that involves smartphones, sensor networks or industrial machinery.

The shorter geographic distance between computing power and the user opens up new potential applications for businesses that rely on rapid response times. For services that need to respond instantly, it's better for data to be stored or processed locally, at or near the device level.

But while the technology threatens to disrupt colocation and cloud computing, it also creates some specific challenges for organizations when it comes to the design of edge servers. The crucial issue is thermal management.

For any server, thermal design is important when it comes to ensuring reliability and energy efficiency. However, typical server design has been able to rely on the fact that the devices will be housed in large racks, with powerful cooling fans, in purpose-built data center facilities with sophisticated climate control systems.

In comparison, edge computing servers need to be significantly smaller – restricting natural airflow through the device – and there are also challenges in the external environment. When edge servers are deployed in offices, in factories, under desks, inside street furniture, or alongside pre-existing IT systems, you simply can't assume there will be an optimal environment for heat dissipation.

The other key challenge is that the nature of the tasks edge servers will perform – real-time, high-performance, low-latency – means downtime simply will not be tolerable. As a result, edge servers need to be bulletproof, and any risk of overheating has to be mitigated at all costs. And with internal space at a premium in smaller servers, the margin for error is slim.

For thermal engineers, then, it's important to get designs right the first time. The best way to do this is with thermal simulation software. When it comes to edge computing, engineers need to prioritize cooling measures and build in tolerances for thermal issues. That might mean redesigning the layout, selecting different materials or choosing different components. Whatever the approach, thermal engineers need to be confident that their device will perform in challenging circumstances.

By using simulation software, thermal engineers can test the heat flow through their design prototypes before anything is built. This allows them to visualize how heat will flow inside the server, see how it might affect individual components, and amend their designs accordingly.
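As a loose illustration of the kind of calculation such tools perform, the sketch below solves steady-state heat conduction on a coarse 2D grid using Jacobi iteration. Everything in it is hypothetical – the grid size, the hotspot location, the boundary temperatures – and commercial thermal simulation packages model far richer physics, including airflow and radiation. This is only the conduction core of the idea.

```python
import numpy as np

# Minimal sketch: steady-state heat conduction across a 2D board, solved by
# Jacobi iteration of the discrete Laplace equation. Grid size, temperatures
# and hotspot location are all hypothetical.

N = 50                      # 50x50 grid over the board (hypothetical)
T = np.full((N, N), 25.0)   # start at 25 C ambient everywhere
T[20:30, 20:30] = 90.0      # hypothetical hotspot: a component dissipating heat

for _ in range(5000):       # iterate until the temperature field settles
    T_new = T.copy()
    # each interior cell relaxes toward the average of its four neighbours
    T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    T_new[20:30, 20:30] = 90.0  # hold the component at a fixed temperature
    # case walls held at ambient (fixed boundary condition)
    T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = 25.0
    if np.max(np.abs(T_new - T)) < 1e-4:  # converged
        break
    T = T_new

print(f"Peak board temperature: {T.max():.1f} C")
print(f"Temperature partway to the case wall: {T[25, 40]:.1f} C")
```

Running a model like this shows how heat spreads from a hot component toward the enclosure, which is exactly the kind of question – scaled up to full 3D geometry and airflow – that dedicated simulation tools answer before a prototype is ever built.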

As a result, thermal simulation software – and the accurate, actionable test data it provides – is integral to solving some of the thermal challenges posed by edge computing.


By: Tom Gregory, Product Manager