We have been following the progress of “fog computing” for a while, and as more enterprises realize its benefits, a variety of fog solutions have come onto the market. In its latest analysis, “The Fog Rolls In: Network Architectures for IoT and Edge Computing,” Stratecast | Frost & Sullivan provides some clarity about what fog computing is, the applications that need it and the different ways it can be delivered.
“Fog computing” is a phrase coined by Cisco in 2013 to describe a compute and network framework for supporting Internet of Things (IoT) applications, though the framework is not exclusive to IoT. Other latency-sensitive, data-intensive applications can also leverage a fog computing architecture, such as those in which data triggers a time-sensitive local action, as when heat sensors set off a building’s sprinkler system.
Fog computing workloads are split between local and cloud environments: different “things” (i.e., sensor-equipped, network-connected devices) quickly transmit data to locally deployed “fog” or “edge” nodes, rather than communicating directly with clouds. A subset of non-time-sensitive data is then forwarded from the fog nodes to a centralized cloud or data center for further analysis and action. By placing some analytics functionality close to the source, fog and other edge architectures reduce the amount of data traversing the network, minimizing latency and costs.
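To make that data flow concrete, here is a minimal sketch in Python of a fog node that acts locally on time-sensitive readings (the sprinkler example above) and batches everything else for upload to a central cloud. The threshold, batch size and uploader callable are hypothetical illustrations, not anything the report prescribes:

```python
import time
from collections import deque

HEAT_THRESHOLD_C = 60.0  # hypothetical trigger point for the sprinkler example
BATCH_SIZE = 100         # hypothetical batch size for cloud uploads

class FogNode:
    """Sits between local sensors and the cloud: acts locally on
    time-sensitive data, forwards only a batched subset upstream."""

    def __init__(self, cloud_uploader):
        self.cloud_uploader = cloud_uploader  # callable that ships a batch to the cloud
        self.buffer = deque()

    def ingest(self, reading):
        # Time-sensitive path: trigger a local action immediately,
        # without waiting on a round trip to the cloud.
        if reading["temp_c"] >= HEAT_THRESHOLD_C:
            self.activate_sprinklers(reading["zone"])

        # Non-time-sensitive path: buffer and forward in batches,
        # reducing the volume of data traversing the network.
        self.buffer.append(reading)
        if len(self.buffer) >= BATCH_SIZE:
            self.flush()

    def activate_sprinklers(self, zone):
        print(f"[local action] sprinklers on in zone {zone}")

    def flush(self):
        batch = list(self.buffer)
        self.buffer.clear()
        self.cloud_uploader(batch)  # e.g., an HTTPS POST to a central analytics service


# Usage: simulate a few sensor readings arriving at the node.
node = FogNode(cloud_uploader=lambda batch: print(f"[cloud] uploaded {len(batch)} readings"))
node.ingest({"zone": "lobby", "temp_c": 22.5, "ts": time.time()})
node.ingest({"zone": "kitchen", "temp_c": 71.0, "ts": time.time()})  # triggers the local action
```

The point of the split is visible in `ingest`: the latency-critical decision never leaves the node, while the cloud still receives the full record, just later and in bulk.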
The compelling reason for fog computing is physics. With millions of IoT devices generating traffic, data traveling over multiple interconnection “hops” between source and destination points across the “best-effort” public Internet is exposed to congestion and delay (latency) that degrade performance, as well as to loss, corruption and cyberattacks. Fog computing avoids this scenario by splitting management functions between the network edge and the cloud.
Different Approaches to Fog Computing
Stratecast anticipates that enterprises will leverage industry leaders – such as network and cloud service providers, platform providers, and IT vendors – for fog computing tools and services that can be deployed more easily and cost-effectively. Network, system and cloud providers are working together to create fog computing solutions composed of edge equipment (e.g., network connectivity, processor capacity, security, management and analytics platforms) and software (e.g., management, monitoring, security and analytics software), based on open standards that enable seamless data sharing and processing between edge devices and the cloud.
Because fog computing is a framework, not an actual product, many enterprises have tried to “do it themselves,” only to face significant complexity and added resources and costs to get it right. That has created a need for managed fog services, in which a third-party provider deploys and manages fog nodes for enterprises. Stratecast also envisions a market for “three-tiered” fog computing, where customers can choose single-tenant server and storage resources while reducing costs by sharing other resources, such as analytics platforms.
At Equinix, we see that many enterprise fog computing challenges are similar to those enterprises face when deploying efficient compute and network architectures that are not exclusive to IoT workloads. This is why we created an Interconnection Oriented Architecture™ (IOA™), a framework for building meshed hybrid IT environments in which people, locations, clouds and data can be interconnected over high-speed, low-latency connections securely, efficiently and cost-effectively. We’re also building a “knowledge base” for enterprises, grounded in real-world use cases, that contains templates, patterns, architectural blueprints, cookbooks and deployment guides to help enterprises select, build and implement interconnection-centric architectures.