A Physical Architecture for Edge Processing
It has been a common problem for years: a device generates large amounts of
data that must be processed instantly, yet moving that data to a centralized database each time introduces latency. The IoT faces the same tension between processing performance and latency.
Consider, for example, a machine that analyzes the quality of an auto part
during the manufacturing process; if the machine optically scans the part and
determines it isn’t up to quality standards, it is automatically rejected. Although
this reduces the human labor involved, it still takes a great deal of time to
transmit the data to the centralized database and compute engine, which
determines the success of the manufacturing process and conveys the result
back to the machine. These kinds of delays are a problem even for high-speed
networks, as high bandwidth doesn’t guarantee low latency.
The cloud complicates this process further. Instead of sending the data from
the device back to the datacenter, it is sent to a remote server that could be
thousands of miles away, and, to make things worse, it is sent over the open
Internet. However, because of the amount of processing required in such cases,
the cloud might be the most cost-effective solution for developers.
Edge computing pushes most of the data processing out to the edge of the network, close to the source (an IoT device, such as a sensor). This model allows us
to divide workloads between the data segment (traditionally residing on a public cloud) and the compute segment (near the IoT device). The goal of edge
computing is to process the data needing rapid turnaround so it can quickly
return results to the device; in this case, the pass/fail results that indicate the
success or failure of the physical manufacturing of the auto part. Edge computing also provides the ability to continue processing when communication to the central system has been interrupted.
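A minimal sketch of this split, using hypothetical names (`EdgeNode`, `score_part`, `PASS_THRESHOLD` are illustrative, not from the article), might look like this: the edge node scores each scanned part locally and returns pass/fail immediately, queuing the raw scan data for later upload to the central system.

```python
import queue

PASS_THRESHOLD = 0.95  # hypothetical quality-score cutoff

def score_part(scan):
    """Stand-in for the optical quality model run at the edge."""
    return scan["defect_free_area"] / scan["total_area"]

class EdgeNode:
    def __init__(self):
        # Raw scans wait here until the central system is reachable.
        self.upload_queue = queue.Queue()

    def inspect(self, scan):
        # Decide pass/fail locally -- no round trip to the datacenter.
        passed = score_part(scan) >= PASS_THRESHOLD
        self.upload_queue.put({"scan": scan, "passed": passed})
        return passed  # conveyed to the machine immediately

node = EdgeNode()
node.inspect({"defect_free_area": 98, "total_area": 100})  # part accepted locally
```

The point of the sketch is that the decision loop never blocks on the network: the only latency the machine sees is the local computation, and the queue absorbs any interruption in the link to the central system.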
The data should be stored near the device, though typically only temporarily; eventually it is migrated to the public cloud for permanent storage.
Thus, we replicate processing and data storage close to the source, but it is
more of a master/slave type of architecture in which the centralized system
ultimately becomes the “source of truth” for all the data, and edge processing
is merely a node of the centralized system.
We need a better strategy and possibly an increased investment of time and
money in the development process to yield higher-performing and better-designed IoT systems that can cost-effectively handle the IoT’s increasing
demands and complexity.
Objectives
The IoT has dramatically changed the requirements for processing and managing data. Through a common architecture that leverages commodity cloud
and non-cloud technology, and that can be easily mapped to existing and
emerging technology, we’ll have a solid framework for a more responsive IoT
data-processing model that will actually reduce operational costs.
46 | THE DOPPLER | FALL 2016