How Does Edge Computing Reduce Latency?

May 24, 2022 - Ellie Gabel

Revolutionized is reader-supported. When you buy through links on our site, we may earn an affiliate commission. Learn more here.

The number of internet-connected devices is on track to almost double between now and 2030. Smartphones, laptops, smart devices, autonomous vehicles, and industrial IoT equipment are all contributing to this explosive growth. As the number of connected devices grows, however, many end users are finding that latency is becoming a major issue. Reducing latency is especially important for consumers who live in densely populated areas and for businesses that maintain large fleets of IoT devices.

Edge computing is one way to reduce latency. While implementations of edge computing vary from business to business, most approaches use the same principle — decentralized data centers called “edge nodes” that sit close to end-users, taking pressure off of the cloud while reducing latency.

How to Reduce Latency with Edge Computing

Many internet-connected smart and IoT devices rely on the cloud to function. As the device gathers data, it may need to store or analyze certain information. 

For example, a smart thermostat may analyze patterns of energy use and building temperature data to determine the optimal settings for a home.

In an industrial setting, a manufacturing business may use IoT devices to analyze equipment performance, allowing the business to optimize equipment settings or see failure coming.

These individual IoT and smart devices often don’t have enough processing power or storage to perform all this analysis on-device. 

Instead, they use their internet connection to send some of the work to the cloud.
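As a rough sketch of that offloading pattern, the Python snippet below sends a reading to a cloud endpoint and waits for the result. The URL and payload shape are invented for illustration; a real device would use its vendor's API.

```python
import json
import urllib.request

# Hypothetical cloud endpoint. The URL and payload format are
# illustrative assumptions, not a specific vendor's API.
CLOUD_ENDPOINT = "https://cloud.example.com/api/analyze"

def offload_reading(device_id: str, reading: dict) -> dict:
    """Send a sensor reading to the cloud and wait for the analysis result."""
    payload = json.dumps({"device": device_id, "reading": reading}).encode()
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The device blocks here until the cloud responds. This round trip
    # is exactly where network latency shows up.
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.load(response)
```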

The cloud can handle all of this work, but it is far away from devices on the “edge” of a network. In some cases, an edge device may be thousands of miles away from the data center where its data will be analyzed.

As a result, smart devices may have to wait for information to travel to the cloud, then for the cloud to process the data, and then again for the results to be sent back from the cloud to the device.

This latency, or delay, usually isn’t massive. Often, the delay is just a few milliseconds. Over time, however, even these short delays can significantly reduce the performance of smart devices.
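A back-of-the-envelope calculation shows how those milliseconds add up. The numbers below are illustrative assumptions, not measurements:

```python
# Illustrative numbers only. Actual delays depend on the network,
# the distance to the data center, and the workload.
uplink_ms = 20      # device -> distant cloud data center
processing_ms = 10  # cloud-side analysis
downlink_ms = 20    # cloud -> device

round_trip_ms = uplink_ms + processing_ms + downlink_ms  # 50 ms per request

# A device polling once per second spends this much time waiting each hour:
requests_per_hour = 3600
waiting_s = round_trip_ms * requests_per_hour / 1000
print(f"{round_trip_ms} ms per round trip -> {waiting_s:.0f} s of waiting per hour")
# 50 ms per round trip -> 180 s of waiting per hour
```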

Processing Data on the Network’s Edge

Edge computing moves some of the work that typically takes place in the cloud to the “edge” of the network, closer to end-users and their devices.

In practice, edge computing often relies on decentralized data centers, called edge nodes, to handle some of the work normally done in the cloud. Edge nodes may be operated by managed service providers or by the owners of the edge devices that use them.

In addition to leveraging edge nodes, edge computing also typically moves some computing work to edge devices themselves.

Edge computing systems must determine which information needs cloud processing and which information can be handled locally, either on edge devices or in an edge node. Typically, higher-priority work will stay local, which will reduce latency on important analysis. 

Tasks that require greater amounts of processing power or storage — or work that isn’t a top priority — will often be sent to the cloud for processing.
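As a minimal sketch of that routing decision, the Python snippet below keeps latency-sensitive, lightweight tasks local and sends everything else to the cloud. The priority cutoff and capacity budget are invented for illustration; real edge platforms use far more sophisticated schedulers.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int        # higher = more latency-sensitive
    cpu_demand: float    # fraction of an edge node's capacity (0-1)

# Thresholds are illustrative assumptions, not values from the article.
PRIORITY_CUTOFF = 7
EDGE_CPU_BUDGET = 0.5

def route(task: Task) -> str:
    """Decide whether a task runs locally or is sent to the cloud."""
    # Latency-sensitive work that fits the local budget stays on the
    # edge device or a nearby edge node.
    if task.priority >= PRIORITY_CUTOFF and task.cpu_demand <= EDGE_CPU_BUDGET:
        return "edge"
    # Heavy or low-priority work tolerates the trip to the cloud.
    return "cloud"

print(route(Task("anomaly-detection", priority=9, cpu_demand=0.2)))  # edge
print(route(Task("monthly-report", priority=2, cpu_demand=0.9)))     # cloud
```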

While most businesses continue to rely on cloud computing for edge device data processing, edge computing is becoming increasingly popular.

According to Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud. The firm predicts that this share will grow to 75% by 2025.

As responsive or real-time data processing becomes more important, end-users of edge devices may find that cloud processing alone can’t meet their needs. Edge computing may emerge as one of the best solutions for the problem of latency in internet-connected edge devices.

Edge Computing vs. Fog Computing

The term edge computing is sometimes used interchangeably with fog computing. Some experts, however, distinguish between fog and edge computing. 

In that usage, fog computing refers to a computing layer between the cloud and the edge network. This layer analyzes information from edge devices, then determines how to store the data or whether to analyze it further.

For example, imagine an IoT sensor on the edge of the network that captures temperature information for a room every single second. 

This sensor continuously transmits information to the fog layer. This layer determines if the information is relevant or irrelevant. 

The fog layer may send relevant information to the cloud for long-term storage or analysis. Irrelevant information may be deleted or analyzed by the fog layer and used to power remote monitoring solutions or localized learning models.
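A minimal sketch of that relevance filter, assuming a made-up threshold for what counts as a meaningful temperature change, might look like this:

```python
def fog_filter(readings, last_sent=None, threshold=0.5):
    """Forward only readings that changed meaningfully; keep the rest local.

    The threshold is an illustrative assumption (degrees of change worth
    reporting upstream), not a value from the article.
    """
    for temp in readings:
        if last_sent is None or abs(temp - last_sent) >= threshold:
            last_sent = temp
            yield ("cloud", temp)  # relevant: send up for storage/analysis
        else:
            yield ("local", temp)  # not worth forwarding: use for local monitoring

# One reading per second from a hypothetical room sensor:
stream = [21.0, 21.1, 21.0, 22.3, 22.4, 19.8]
for destination, temp in fog_filter(stream):
    print(destination, temp)
# cloud 21.0 / local 21.1 / local 21.0 / cloud 22.3 / local 22.4 / cloud 19.8
```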

Writers sometimes use the terms edge computing or fog computing in combination with the “edge cloud,” another similar-sounding concept. 

With the edge cloud, businesses and end-users store data on edge devices, in addition to storing information on the cloud. This practice can make captured information more accessible to local business systems and other edge devices.
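One way to picture the edge cloud is as a write-through store: each reading is kept locally and also mirrored to the cloud. The sketch below uses simple in-memory lists as stand-ins for both storage backends.

```python
class EdgeCloudStore:
    """Write-through sketch of the 'edge cloud' idea. Data is kept on a
    local edge device and mirrored to the cloud. Both backends here are
    in-memory stand-ins, not a real storage API."""

    def __init__(self):
        self.local = []   # fast, nearby copy for local systems
        self.cloud = []   # stand-in for a remote cloud store

    def save(self, reading):
        self.local.append(reading)   # immediately available at the edge
        self.cloud.append(reading)   # mirrored for long-term storage

    def latest_local(self):
        # Local business systems read from the edge copy, avoiding a
        # round trip to the cloud.
        return self.local[-1] if self.local else None

store = EdgeCloudStore()
store.save({"temp_c": 21.4})
print(store.latest_local())  # {'temp_c': 21.4}
```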

Why Is Latency an Important Challenge for End-Users?

For some users of smart devices, latency isn’t a serious issue — at worst, a delay in data processing may mean some lag or hiccups in the behavior of devices like smart thermostats.

However, latency can have a life-or-death impact on performance for some devices. 

Many autonomous cars, for example, rely on regular communication with the cloud to power their self-driving and machine vision algorithms. Even slight latency can significantly impact the performance of these algorithms. 

For a device as important as a self-driving car, a loss of performance can quickly cause serious problems — primarily inconsistent behavior and glitches that may lead to dangerous driving or even accidents.

Autonomous cars aren’t the only devices that require a near-real-time connection to the cloud. Surgery robots, for example, are becoming more popular in the medical industry. These robots enable telesurgery, remote surgery that allows a surgeon to operate on a patient who may be thousands of miles away. 

These robots can help make surgery much more accessible — for example, allowing a patient in a rural area or with limited mobility to access specialist surgeons without having to travel miles to a hospital with the right equipment and staff.

Some surgery robots also assist surgeons, making possible new types of minimally invasive surgery. These robots can help to reduce the impact of surgery on a patient, potentially accelerating recovery and minimizing pain.

As with autonomous cars, latency can be a serious problem for these robots. Delays in processing may cause unusual behavior that can make surgery with robotics more difficult.

Even when latency isn’t an issue of life or death, it can still have a significant negative impact on the performance of certain systems.

Technology That Can Support Edge Computing

Many businesses that adopt edge computing to support their IoT and smart devices also leverage other technologies to improve latency and device connectivity.

Technology like 5G can help with this latency. 5G networks include a few different networking innovations that make towers better at handling many simultaneous connections.

As a result, using a 5G modem in smart devices can reduce latency for edge devices — especially in areas where there are large numbers of internet-connected devices. 

How Edge Computing May Evolve

As the number of internet-connected devices increases around the world, solutions like edge computing are likely to become a lot more important. At the same time, they’ll also probably become more sophisticated. 

Innovations in edge computing may help reduce latency more effectively and improve connectivity for smart devices.

Companies are currently hard at work developing edge devices that have the processing power necessary to handle complex analysis while remaining compact.

Managed service providers around the country are creating new edge nodes, which will provide the infrastructure that edge computing will need to scale.

Future innovations in 5G networking technology may also help make edge devices even faster, minimizing some of the current barriers to real-time analysis and monitoring at the edge.

Edge Computing Helps End Users Reduce Device Latency

Both businesses and consumers rely on devices at the edge of the network. These devices don’t always have the computing power or storage necessary to function properly, however. 

Sending work to the cloud for processing is an option, but high latency can create new problems.

Edge computing can reduce latency by moving processing power closer to the network’s edge. Already, the technology is helping to make important edge devices — like IoT sensors — more useful. 

In the near future, advancements in edge computing may mean that more processing shifts from the cloud to the network’s edge.


Author

Ellie Gabel

Ellie Gabel is a science writer specializing in astronomy and environmental science and is the Associate Editor of Revolutionized. Ellie's love of science stems from reading Richard Dawkins books and her favorite science magazines as a child, where she fell in love with the experiments included in each edition.
