Up, up and away: The future of data centres is in the cloud

Today, the amount of data stored by businesses globally totals 2.2 zettabytes, which is more than two billion terabytes, or the equivalent of 320 billion full-length DVDs. In 2013 this data is expected to grow by 67 percent for enterprises and 178 percent for SMBs [1]. With businesses having to store and protect more digital content every day, the pressure is growing for organisations to invest in expanding the data centre. By Ciena.


According to IDC, there are over seven million data centres worldwide, ranging from closet-sized rooms in small and medium-sized companies, to stand-alone facilities serving large enterprises, to huge facilities of 500,000-plus square feet operated by data centre service providers. As businesses move some of their data processing to multi-tenant data centres and the cloud, growth in on-premise data centre capacity stabilises while demand for off-premise data centre providers rises.


Undoubtedly, with space comes cost. According to research from Canalys [2], total investment in data centre infrastructure will grow 5 percent on average per year to reach $152 billion in 2016. Faced with the rising costs of this phenomenal data growth, businesses are moving some of their data processing workloads to multi-tenant data centres and the cloud, especially for peak or unique processing requirements.


In 2008, the online media creation business Animoto became one of the early success stories for Amazon Web Services. Its Facebook application experienced such viral growth that 750,000 users signed up in three days.

There was no way a start-up company could install sufficient data processing capacity to meet this instant demand, but Amazon was able to scale cloud-based server capacity to match the business growth. Since then, demand for cloud services has continued to expand from consumers to enterprises and governments, all drawn to on-demand access to elastic, pooled resources.

Many inter-data centre workloads involve large peak or periodic data transfers: moving video files, balancing workloads, or quickly shifting applications for proactive disaster avoidance. Describing such a move in an article is simple enough, but in reality the process is far more complex. To understand this better, let's imagine a company with a 200 Mb/s data connection to the outside world that needs to make a server platform change. To do so without shutting down the business in the process, it would like to temporarily move the applications on this server to the public cloud. Let's further assume that this server is running 10 virtual machines of approximately the same size as an Amazon AWS medium server instance, with 5 GB of memory and 1,000 GB of storage per virtual machine.

This means the total data to transfer would be about 10 terabytes: 10 x (5 GB memory + 1,000 GB storage). However, transferring 10 TB of data over a typical 200 Mb/s network connection would take nearly a week, assuming no re-transmissions and 80 percent link utilisation. Clearly this company is not going to be able to run even this simple workload move over such a network service.
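To make the arithmetic easy to check, here is a minimal sketch in Python using the figures above; the 80 percent effective utilisation is the only assumption carried over from the text.

```python
# Rough transfer-time estimate for the scenario above.
# Figures come from the article; the 80 percent effective utilisation
# (and no re-transmissions) is the assumption stated in the text.

def transfer_time_days(data_gb: float, link_mbps: float, utilisation: float = 0.8) -> float:
    """Approximate days needed to move data_gb gigabytes over a
    link_mbps megabit-per-second connection at the given utilisation."""
    bits = data_gb * 1e9 * 8                   # gigabytes -> bits (decimal units)
    effective_bps = link_mbps * 1e6 * utilisation
    return bits / effective_bps / 86_400       # 86,400 seconds per day

# 10 VMs x (5 GB memory + 1,000 GB storage) = 10,050 GB, roughly 10 TB
print(round(transfer_time_days(10 * (5 + 1_000), 200), 1))   # -> 5.8 days
```

Under those assumptions the move takes roughly 5.8 days, which is where the "nearly a week" figure comes from.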

NaaS and Cloud to the rescue
This issue can be debilitating for IT organisations. To make such moves work, enterprises require new cloud network connectivity options that provide efficient, operations-driven networking within an orchestration environment, delivering Performance-on-Demand.

The network supporting this environment needs to be as scalable and, in a sense, as "virtualised" as storage and servers are today. For example, in the scenario above, a flexible Network as a Service (NaaS) would allow the company to burst to a 1 Gb/s connection or more, get that workload transferred in less than a day, and then return to the steady-state bandwidth level.
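Applying the same back-of-the-envelope calculation to a few burst rates shows the effect. As before, the 80 percent utilisation factor is an assumption; at the full 1 Gb/s line rate, or at a slightly higher burst rate, the transfer comes in under a day.

```python
# The same rough estimate at burst rates a NaaS could provide
# (80 percent effective utilisation assumed, as in the earlier sketch).
DATA_BITS = 10 * (5 + 1_000) * 1e9 * 8          # ~10 TB expressed in bits
for gbps in (1, 2, 10):
    days = DATA_BITS / (gbps * 1e9 * 0.8) / 86_400
    print(f"{gbps} Gb/s: {days:.2f} days")      # -> 1.16, 0.58, 0.12 days
```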


According to Gartner, in 2008 just 12 percent of server workloads were virtualised. By 2014, Gartner expects that figure to more than quadruple, with 60 percent of server workloads virtualised.


Enterprises are evolving their data centres through consolidation, virtualisation, data protection, and cloud services. A key business driver is the need to manage growth at lower cost by becoming more efficient. Shifting spend from inflexible capital expenditure to on-demand operating expenditure is now an option for data centre infrastructure-based services.


The data centre is also changing for carriers and data centre service providers. Services continue to evolve from static, physical co-location capacity, to managed services, and now to Infrastructure as a Service (IaaS), turning computing and storage into on-demand, pay-as-you-go services. The evolution of cloud toward expanded enterprise IT utility services, and service providers' need to deliver cloud infrastructure services efficiently, are driving the creation of a virtual data centre architecture inter-connected by a cloud backbone network. Multiple provider and enterprise customer data centres are connected to enable workload orchestration, traffic generation and flow. The physical walls of individual data centres are effectively broken down to create a virtual data centre capacity encompassing multiple physical ones: a data centre "without walls".


We use the term "Data Centre Without Walls" (DCWoW) to describe an architecture that creates a multi-data centre, hybrid cloud environment able to function as one virtual data centre and address any magnitude of workload demand. Such an offering benefits both cloud service providers or carriers and enterprise IT customers by enabling seamless workload movement and greater resource efficiency.
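As a purely illustrative sketch, not Ciena's implementation, an orchestrator in such a federated environment might weigh each site's spare capacity against the network time needed to move a workload there. Every site name, capacity and link rate below is hypothetical; a real orchestrator would draw these from inventory and telemetry systems.

```python
# Hypothetical illustration of "data centre without walls" placement logic.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    spare_vm_slots: int
    link_gbps: float     # burstable bandwidth to this site

def best_site(sites: list[Site], vms_needed: int, data_tb: float) -> Site:
    """Pick a site with enough spare capacity that minimises transfer time."""
    candidates = [s for s in sites if s.spare_vm_slots >= vms_needed]
    # Transfer time in hours at 80% utilisation, as in the earlier sketches.
    return min(candidates,
               key=lambda s: data_tb * 8_000 / (s.link_gbps * 0.8) / 3_600)

sites = [Site("on-prem", 2, 0.2),
         Site("provider-east", 40, 1.0),
         Site("provider-west", 40, 10.0)]
print(best_site(sites, vms_needed=10, data_tb=10).name)   # -> provider-west
```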


A DCWoW enables effective asset pooling among data centres to deliver resource efficiencies of up to 33 percent for service providers [3], as well as increased resiliency and performance gains over isolated provider data centre architectures. This new hybrid cloud architecture also improves enterprise economics, helping IT to achieve the 25 percent reduction in IT services and hardware expenses [4] promised by cloud adoption.


To benefit from this operational efficiency, data centre providers will need to expand their attention from intra-data centre connectivity to inter-data centre connectivity, specifically in support of new cloud applications that depend on network bandwidth, scalability, latency and security. After all, the cloud is only as good as the network connection.

The ability to respond to varying workload demands with Performance-on-Demand is a key benefit of an intelligent network for the cloud. In addition to dynamic bandwidth, this network must have higher availability, lower latency and greater reliability, as it would be designed for critical infrastructure services.

With this sort of network, an IT manager has the freedom to treat resources outside the physical walls of his or her building as natural extensions of an owned data centre. In effect, that IT manager now has a "data centre without walls" that provides the same user experience as a completely dedicated data centre, but on a pay-as-you-go basis, making it more economical.

As the DCWoW tools for workload orchestration continue to mature, the distinction between enterprise and cloud data centres will continue to blur. The data centre of the future will be federated and intimately inter-connected with an intelligent network, operating as the central offices of the future.

References
1. Symantec, "2012 State of Information".
2. Canalys, "Data center infrastructure market will be worth $152 billion by 2016".
3. Ciena study on potential savings in infrastructure costs with Ciena's Performance-on-Demand cloud backbone.
4. Sand Hill, "Job Growth in the Forecast: How Cloud Computing is Generating New Business Opportunities and Fueling Job Growth in the United States", 2011.