Size gets a lot of attention when it comes to data. The volume is right there in the once-omnipresent term “big data.”
This is nothing new. When I was an IT industry analyst, I once wrote a research note observing that marketing copy focused too much on the bandwidth numbers associated with large server designs, and not enough on the time between a request for data and its initial arrival – latency.
We have seen a similar dynamic with regard to the Internet of Things and edge computing. With ever-increasing amounts of data being collected by ever-increasing numbers of sensors, there is definitely a need for that data to be filtered or aggregated rather than shipped across the network to a central data center for analysis.
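The filter-and-aggregate pattern described above can be sketched in a few lines. This is a minimal illustration, not any specific product's implementation; the sensor readings and summary fields are made-up assumptions for the example:

```python
from statistics import mean

# Hypothetical one-minute batch of raw temperature readings from one
# edge sensor (illustrative values, not from the article or the survey).
readings = [21.4, 21.5, 21.4, 29.8, 21.6, 21.5]

def summarize(batch):
    """Reduce a batch of raw samples to the few numbers worth shipping
    upstream - count, min, max, mean - instead of every raw reading."""
    return {
        "count": len(batch),
        "min": min(batch),
        "max": max(batch),
        "mean": round(mean(batch), 2),
    }

payload = summarize(readings)  # four numbers cross the network, not six samples
print(payload)
```

The point of the sketch is the shape of the computation: the raw samples stay local, and only a small summary (which still preserves the anomalous 29.8 spike as the max) traverses the network to the central data center.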
Indeed there is. Red Hat recently commissioned Frost & Sullivan to conduct 40 interviews with line-of-business executives (along with a few IT roles) at organizations with more than 1,000 employees globally. They represented companies in manufacturing and energy/utilities, split between North America, Germany, China, and India. When asked about the main drivers for implementing edge computing, bandwidth issues did come up, as did the problems of having too much data in a central data center.
Latency and connectivity top the list
However, interviewees focused mostly on latency and, more broadly, on their dependence on network connectivity. Drivers such as the need to improve connectivity, increase computing speed, process data faster and on-site, and avoid the latency of moving data to the cloud and back came up repeatedly.
For example, one decision-maker in the oil and gas industry told us that moving computing to the edge “improves your ability to respond to any episodic situation because you no longer have to centralize everything. You can take local data, run it through sophisticated computing frameworks or models, and make real-time decisions. The other is in terms of general security. Because your data doesn’t leave the site – it is both produced and consumed locally – the risk of someone intercepting the data as it traverses the network pretty much vanishes.”
For another data point, a survey that Red Hat ran with the Pulse.qa IT community found that 45% of 239 respondents said lower response time was the biggest advantage of pushing workloads out to the edge. (The second most common answer was optimized data performance, which is at least related.) Lower bandwidth? That came in at single digits (8%).
Response time also loomed large when we asked our interviewees what they saw as the most important benefits of edge computing.
The most important benefits cited related to immediate access to data: the ability to access data in real time so it could be processed and analyzed on-site right away, the elimination of delays caused by data transfers, and 24/7 access to reliable data – opening the door to continuous analysis and rapid results. The common theme was actionable local analysis.
Cost came up as an advantage of edge computing here and there – particularly in the context of reducing cloud usage and its associated costs. However, consistent with other research we have conducted, cost was not cited as a primary driver or benefit of edge computing. Instead, the drivers mostly involve access to data and the gains that flow from it.
Hybrid cloud and data are the drivers
Why are we seeing this increased focus on edge computing and associated local data processing? Our interviews and other research suggest that two reasons may be particularly important.
The first is that, 15 years after the public cloud was first introduced, IT organizations are increasingly adopting an explicit hybrid cloud strategy. Red Hat’s Global Tech Outlook 2022 survey found that hybrid cloud was the most popular cloud strategy among its more than 1,300 IT decision-maker respondents.
Public cloud-first was the least popular cloud strategy, and it was down from the previous year’s survey. This is consistent with data we’ve seen in other surveys.
None of this means that public clouds are in any way a passing fad. But edge computing has helped focus attention on computing (and storage) at the various edges of the network, rather than solely on a handful of public cloud providers. Edge computing added to the rationale that public clouds would not be the only place where computing happens.
The second reason is that we are running more complex and data-intensive workloads at the edge. Interviewees told us that one of the main drivers for implementing edge computing is the need to embrace digital transformation and implement solutions such as the Internet of Things, artificial intelligence, connected cars, machine learning, and robotics. These applications often have a cloud component as well. For example, it is common practice to train machine learning models in a cloud environment and then run them at the edge.
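The train-in-the-cloud, run-at-the-edge split mentioned above can be sketched minimally. This is an assumption-laden toy: a least-squares line fit stands in for real model training, and the JSON string stands in for the model artifact shipped to edge devices; none of it comes from the article:

```python
import json

# --- "Cloud" side: train on pooled historical data ----------------------
# (Toy stand-in for real training: a one-variable least-squares line fit.)
def train(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return {"slope": slope, "intercept": my - slope * mx}

model = train([1, 2, 3, 4], [3, 5, 7, 9])   # made-up training data
artifact = json.dumps(model)                # "model artifact" shipped to the edge

# --- "Edge" side: load the artifact and score data locally ---------------
def predict(artifact, x):
    m = json.loads(artifact)
    return m["slope"] * x + m["intercept"]

print(predict(artifact, 10))  # inference happens locally; no cloud round trip
```

The design point is the boundary: training needs the pooled data and heavier compute that the cloud provides, while inference needs only the small, portable artifact, so it can run next to the sensors with no network round trip per prediction.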
We’re even starting to see Kubernetes-based cluster deployments at the edge with products like Red Hat OpenShift. Doing so not only provides scalability and flexibility for edge deployments, but also a consistent set of tools and processes from the data center out to the edge.
Not surprisingly, data locality and response time are important characteristics of a hybrid cloud that an edge deployment may be part of. Monitoring and observability matter too, as do provisioning and other aspects of management. And yes, bandwidth – and link reliability – play into the mix. A hybrid cloud is a form of distributed system, so if something is important in any other computer system, it is likely to be important in a distributed system as well. Maybe more so.
To learn more, visit Red Hat here.