For years, we’ve heard that our best bet is to put everything into the cloud. Now the computing action seems to be moving away from centralized services and out toward the edge: embedded systems, sensors, kiosks, point-of-sale terminals, mobile devices, wearables, robots, the internet of things, you name it. These devices demand resident software, and they produce and store data locally. That means software and data run, and require support, at a million different locations. How should technology professionals prepare for this ever-edgy state of things?
It’s a big deal. An average of 35% of US computing resources now reside at the edge, according to an IDG/Foundry survey commissioned by Insight Enterprises and reported by Megan Crouse in TechRepublic. In the same survey, 36% of respondents listed the need to process data from edge devices as a top objective, up from 27% the year before. The appeal is straightforward: localized data processing offers low latency, along with the security of not having data in motion.
Industry observers agree that edge systems will increasingly do the bulk of information technology work. “Machine learning and aggregation-type computations are being deployed more and more at the edge,” says Rob Mesirow, partner and connected solutions/IoT leader for PwC. “The key idea is to reduce the size and the number of events that have to be sent to the cloud. Computations that can be performed in a streaming fashion on a bounded number of data streams can be easily shifted to the edge.”
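To make Mesirow’s point concrete, here is a minimal sketch of that pattern in Python: a device-side loop that aggregates a bounded stream of readings locally and forwards only summaries, plus the occasional anomaly, to the cloud. The sensor feed, the thresholds, and the send_to_cloud() uplink are all hypothetical stand-ins, not any particular product’s API.

```python
# Edge-side streaming aggregation: collapse many raw readings into a few
# summary events before anything crosses the network. All names and
# thresholds here are invented for illustration.
import random
import statistics

WINDOW_SIZE = 60          # raw readings per summary event (assumed)
ANOMALY_THRESHOLD = 90.0  # readings above this are forwarded at once (assumed)

def send_to_cloud(payload: dict) -> None:
    """Stand-in for an uplink call (e.g., an MQTT publish or HTTPS POST)."""
    print("uplink:", payload)

def read_sensor() -> float:
    """Stand-in for a local device read."""
    return random.gauss(70.0, 8.0)

def run_edge_loop(num_readings: int = 300) -> None:
    window: list[float] = []
    for _ in range(num_readings):
        value = read_sensor()
        if value > ANOMALY_THRESHOLD:
            # Rare events justify an immediate cloud round-trip.
            send_to_cloud({"type": "anomaly", "value": round(value, 2)})
        window.append(value)
        if len(window) == WINDOW_SIZE:
            # One summary replaces WINDOW_SIZE raw events.
            send_to_cloud({
                "type": "summary",
                "count": len(window),
                "mean": round(statistics.mean(window), 2),
                "max": round(max(window), 2),
            })
            window.clear()

if __name__ == "__main__":
    run_edge_loop()
```

In this toy run, 300 raw readings collapse into five summary events plus any anomalies, which is exactly the reduction in event volume Mesirow describes.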
Real-time response “is hard to achieve at scale with a single centralized cloud computing cluster,” says Jeff Fried, director of product management at InterSystems. “Similarly, real-time and near-real-time analytics are achievable, and real-time insight is very popular once you realize you can achieve it.”
The push to the edge is a trend that will not let up anytime soon. “As networks are built out, the window to introduce the next great technologies and capabilities will open wider and wider,” says Adam Compton, director of strategy at Schneider Electric. “These capabilities will have a profound impact on us all, but will require tremendous, localized computing capabilities to ensure latency is almost nonexistent.”
At the same time, the edge simply may not yet be ready for all the computing power and data moving its way. “Much of the data being generated has yet to be leveraged in a way that incorporates AI and meaningful outputs,” Compton cautions. “Networks are still growing. Bottlenecks are being slowly addressed. Throughput and latency are improving, but there is still lots of work to be done before things really explode at the edge.”
As a result, “continued upgrades to the fiber and network infrastructure, the birth of smart cities, and the evolution of AI and AR will lead to the next killer applications,” says Compton.
Effectively leveraging all the data flowing in from the edge is another challenge enterprises need to get their arms around. “Even though IoT has been in the spotlight for a few years now, most companies have yet to fully take advantage of IoT regardless of whether they have already deployed IoT solutions,” says Mesirow. “Part of the problem is that IoT data itself is worthless unless it is tied to a solution for a particular business problem. Making the leap from collecting operational IoT data to IoT insights is non-trivial and a lot of companies struggle with this.”
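As a toy illustration of that leap from data to insight, the sketch below attaches a business rule to raw readings: a pump’s vibration level only becomes valuable once it maps to a maintenance action. The vibration bands, service-hour cutoff, and pump data are all invented for this example.

```python
# From operational IoT data to an IoT insight: the value lives in the rule,
# not the reading. Thresholds and data below are hypothetical.
from dataclasses import dataclass

@dataclass
class PumpReading:
    pump_id: str
    vibration_mm_s: float     # vibration velocity in mm/s (assumed metric)
    hours_since_service: int

def to_insight(r: PumpReading) -> str:
    """Map a raw reading to a maintenance action via an invented rule."""
    if r.vibration_mm_s > 7.1:
        return f"{r.pump_id}: stop and inspect now (failure-risk band)"
    if r.vibration_mm_s > 4.5 and r.hours_since_service > 2000:
        return f"{r.pump_id}: schedule service this week"
    return f"{r.pump_id}: healthy, no action"

for r in [PumpReading("pump-07", 3.1, 900),
          PumpReading("pump-12", 5.2, 2400),
          PumpReading("pump-19", 8.0, 150)]:
    print(to_insight(r))
```

Without the rule, the readings are just numbers; with it, each one answers a specific business question, which is the tie-in Mesirow says most companies are missing.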
Technology staffs “accustomed to focusing on availability need to start focusing much more on response time,” says Fried. “Typically, data from devices must be combined with data from other sources to be meaningful. For example, bedside medical device data must be correlated with data such as the time, location and identity of the patient. In most cases, that data is locked away in various systems and locations.”
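Fried’s bedside example boils down to a join between a device stream and context that lives in another system. Here is a minimal sketch of that correlation step, with a hypothetical bed-assignment table standing in for the hospital system (for example, an admit/discharge/transfer record) that actually holds patient identity.

```python
# Enrich a raw device reading with time, location, and patient identity.
# The assignment table and reading format are invented for illustration.
from datetime import datetime, timezone

# Stand-in for a lookup against a separate hospital system.
BED_ASSIGNMENTS = {
    "ICU-bed-4": {"patient_id": "P-1043", "ward": "ICU"},
    "ICU-bed-5": {"patient_id": "P-2217", "ward": "ICU"},
}

def enrich(raw: dict) -> dict:
    """Correlate a device reading with the patient assigned to its bed."""
    context = BED_ASSIGNMENTS.get(raw["bed_id"])
    if context is None:
        raise LookupError(f"no patient assigned to {raw['bed_id']}")
    return {
        **raw,
        **context,
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

reading = {"bed_id": "ICU-bed-4", "metric": "heart_rate", "value": 72}
print(enrich(reading))
```

The code itself is trivial; the hard part, as Fried notes, is that the assignment data and the device feed are typically locked away in different systems and locations.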
Compton agrees that handling such huge data streams will take time. “We’ve all had the experience of knowing that a valuable data set exists but not knowing how to access it, organize it, or view it,” he says. “Big data may be an old term by now, but that doesn’t mean the era is over.”