Cloud Computing Resiliency at the Edge
By Insight Editor / 25 Mar 2019 / Topics: Cloud, Data center
As a result, we’re left with what we refer to in this blog as a “hybrid data center environment”: a mix of (1) centralized cloud data centers, (2) regional medium to large data centers, and (3) localized, smaller, on-premises data centers. What was once a 1MW data center on-premises at an enterprise branch location may now consist of a couple of racks of IT equipment running critical applications and/or providing network connectivity to the cloud. The decreased footprint and capacity of the on-premises data center should not be equated with lower criticality. In fact, in many cases, what’s left on-premises becomes more important.
The centralized cloud was originally conceived for certain types of applications – e.g., email, payroll, and social media – where timing wasn’t absolutely crucial. But as critical applications shifted to the cloud, it became apparent that latency, bandwidth limitations, security, and other regulatory requirements had to be addressed. Consider self-driving automobiles: an extensive amount of compute is required for this application to run successfully, and latency can’t be tolerated, or people get into accidents. Healthcare is another life-critical application, with sensors collecting data on patients and surgical tools providing surgeons with real-time intraoperative feedback. The need to bring compute closer to the point of use became apparent.
High-bandwidth content distribution is another application that benefits from bringing content closer to the point of use: bandwidth costs are reduced and streaming performance improves.
For many enterprises, there is also a need (or desire) to keep some business-critical applications on-premises. This allows for a greater level of control, including meeting regulatory and availability requirements. Sometimes these applications are replicated in the cloud for redundancy. Schneider Electric White Paper 226, The Drivers and Benefits of Edge Computing, further explains how these applications are driving us toward an ecosystem that includes more regional and localized data centers. In this section, we’ll describe each of these data center types and discuss the typical physical infrastructure practices deployed in each.
Large multi-megawatt centralized data centers, whether part of the cloud or owned by an enterprise, are commonly viewed as highly mission-critical and, as such, are designed with availability in mind. Proven best practices have been deployed for years to ensure these data centers do not go down. Facilities and IT staff operate these sites with the number-one objective of keeping all systems up and running 24/7. In addition, these sites are commonly designed, and sometimes certified, to Uptime Institute’s Tier III or Tier IV standards. Colocation and cloud providers often tout these high-availability design attributes as selling points for moving to their data centers.
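To put those tier designations in perspective, here’s a minimal sketch that converts an availability percentage into the downtime it allows per year. The availability targets below are the commonly cited Uptime Institute figures, not numbers taken from this article:

```python
# Minimal sketch: translate an availability percentage into the annual
# downtime budget it implies. Tier figures are the commonly cited
# Uptime Institute targets (an assumption, not stated in this article).
MINUTES_PER_YEAR = 365 * 24 * 60

tiers = {
    "Tier I":   0.99671,
    "Tier II":  0.99741,
    "Tier III": 0.99982,
    "Tier IV":  0.99995,
}

for tier, availability in tiers.items():
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{tier}: {availability:.3%} availability "
          f"allows about {downtime_min:.0f} minutes of downtime per year")
```

The jump from Tier II to Tier III, for example, takes the allowable downtime from roughly 22 hours per year down to about 1.6 hours.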
Common best practices include:
As discussed earlier, connectivity to the cloud is crucial for edge sites. Yet oftentimes a single internet service provider supplies that connection, which represents a single point of failure. Cable chaos in the networking closets also breeds human error.
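As a rough illustration of why that single provider matters, the sketch below applies the standard parallel-redundancy calculation with illustrative (assumed) link availabilities: the site loses its cloud connection only if every independent link is down at the same time.

```python
# Minimal sketch (assumed figures, not from the article): estimated
# availability of an edge site's cloud connection with one ISP vs. two
# independent ISPs. With parallel, independent links, the connection is
# down only when all links are down simultaneously.
def parallel_availability(*link_availabilities: float) -> float:
    """Availability of N independent links in parallel."""
    prob_all_down = 1.0
    for a in link_availabilities:
        prob_all_down *= (1.0 - a)
    return 1.0 - prob_all_down

single_isp = 0.995                              # illustrative single-link availability
dual_isp = parallel_availability(0.995, 0.995)  # two independent providers

print(f"Single ISP: {single_isp:.4%}")  # 99.5000%
print(f"Dual ISP:   {dual_isp:.4%}")    # 99.9975%
```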
Best practices to reduce these risks include:
Cloud adoption is driving more and more enterprises to a hybrid data center environment of cloud-based and on-premises data centers (the edge). Although what’s left on-premises may be shrinking in physical size, the equipment that remains is even more critical.
This is because:
Unfortunately, most edge data centers today are fraught with poor design practices, leading to costly downtime. A systematic approach to evaluating the availability of all data centers in a hybrid environment is necessary to ensure investment dollars are spent where they will get the greatest return. A scorecard approach was presented that allows executives and managers to view their environment holistically, factoring in the number of people and the business functions each data center supports. This method identifies the most critical sites to invest in.
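A minimal sketch of that scorecard idea follows. The scoring formula and example sites are assumptions for illustration only; the takeaway is that a small edge site with weak availability can outrank a larger, well-protected data center as an investment priority.

```python
# Minimal sketch of a criticality scorecard. The weighting and the
# example sites are illustrative assumptions; the article does not
# define an exact formula.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    people_supported: int        # employees relying on the site
    business_functions: int      # critical business functions hosted
    current_availability: float  # estimated availability today

def criticality_score(site: Site) -> float:
    """Higher score = stronger case for availability investment."""
    impact = site.people_supported * site.business_functions
    risk = 1.0 - site.current_availability
    return impact * risk

sites = [
    Site("Regional DC", people_supported=800, business_functions=12,
         current_availability=0.9998),
    Site("Branch edge closet", people_supported=150, business_functions=5,
         current_availability=0.995),
]

# Rank sites by score; the small edge closet comes out on top here.
for site in sorted(sites, key=criticality_score, reverse=True):
    print(f"{site.name}: score = {criticality_score(site):.2f}")
```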
Prefabricated micro data centers are a simple way to ensure a secure, highly available environment at the edge. Best practices such as redundant UPSs, a secure and organized rack, proper cable management and airflow, remote monitoring, and dual network connectivity ensure the highest-criticality sites can achieve the availability they require.