IF WE TAKE A MOMENT to step back and consider what we’re really trying to achieve, it’s not actually density. The reality is rooted in cold, hard business economics: profitable revenue generation. More equipment in your data center simply means more revenue-generating opportunities per kW, per cabinet, and ultimately per chassis deployed.
Looking at the physical layer within the data center, the obvious area requiring a focus on high density is the patching chassis. This is the point where the network cabling is aggregated and presented to the active equipment, and it is where a lack of maximum density can be truly costly. Market research conducted at TE Connectivity has shown that the cost to build out a square meter of white space in a data center can be as high as $15,000 before any equipment is deployed. Any density inefficiency at your aggregation points can therefore force additional cabinets to be deployed to house the cabling “overspill”. This can have three big financial impacts:
- An additional $15,000 CAPEX (non-recoverable) in whitespace cost per additional aggregation point to cover lost space
- Loss of revenue-generating opportunities in the white space
- Additional cabinet and containment costs
It is clear that high density can really make a big difference in running a data center.
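To make these economics concrete, here is a minimal back-of-the-envelope sketch in Python. Only the $15,000-per-square-meter figure comes from the research cited above; the cabinet footprint, port counts and cabinet cost are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope cost of density inefficiency at an aggregation
# point. Only the $15,000/m^2 whitespace figure comes from the research
# cited above; every other number is an illustrative assumption.
import math

WHITESPACE_COST_PER_SQM = 15_000  # USD, build-out cost before equipment
CABINET_FOOTPRINT_SQM = 1.2       # assumed footprint incl. clearance
CABINET_AND_CONTAINMENT = 3_000   # assumed extra cabinet + containment cost

def overspill_cost(ports_required: int, ports_per_cabinet: int) -> float:
    """CAPEX of the extra cabinets needed when the chassis cannot
    aggregate all required ports within the planned single cabinet."""
    cabinets = math.ceil(ports_required / ports_per_cabinet)
    extra = max(cabinets - 1, 0)  # cabinets beyond the planned one
    return extra * (CABINET_FOOTPRINT_SQM * WHITESPACE_COST_PER_SQM
                    + CABINET_AND_CONTAINMENT)

# 8,000 ports to aggregate: a low-density chassis (4,500 ports/cabinet)
# vs. a high-density one (8,500 ports/cabinet).
print(overspill_cost(8_000, 4_500))  # 21000.0 -- one overspill cabinet
print(overspill_cost(8_000, 8_500))  # 0.0 -- everything fits as planned
```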
As you might expect, you don’t get something for nothing: there are inevitable trade-offs to take into account when trying to achieve “ultra-high” densities in your next network design. These include, but are not limited to, manageability, adaptation to new customer needs, upgrade options as the network gets faster, throughput performance and, finally, ease of use.
Until now, the general assumption was that density could only be achieved at the expense of the other features mentioned above. But is this really true? When you buy a high-performance luxury sports car, you expect it to have not only “ultra-high” engine performance, but also a luxurious interior finish, seats contoured to fit your body shape, and an in-car entertainment system that can match or exceed the sound of the engine.
So why shouldn’t the physical layer support “ultra-high” density while also meeting the needs of the operations team, which must ensure that network availability, agility and efficiency are maintained 24/7? New technologies entering the market combine all of these capabilities.
Increasingly, data center operations are being pushed to the cloud or to third-party colocation environments. Their advantage over in-house DC operations is that the networking infrastructure is already in place and the spend moves from CAPEX to OPEX, allowing very flexible business terms. On the last point in particular, billing by the second, day, month or year lets customers pick and choose services and service providers with great ease, sometimes at the click of a button.
Conversely, a company offering such flexible operating terms and conditions has to be able to respond quickly and with agility to meet the constantly changing demands of its customer base.
Some changes can be handled dynamically within the switching fabric; however, out of concern for security and latency, many customers opt to have their own physical connection to the network. In this environment, being able to perform moves, adds and changes (MACs) at speed and accurately 100% of the time gives service providers the ability to maintain Service Level Agreement (SLA) commitments and revenue streams going forward. The key to achieving this is manageable density.
Manageable density could be achieved by simply reducing the number of ports in a chassis. However, this is unacceptable at a time when fiber connections within the data center are increasing due to hyperscale topologies such as leaf-and-spine switched fabrics. In these environments the topology has become much “flatter”, with any and every device becoming interconnected. To support this, it is critical to be able to make MACs to the cable presentation, both at the front and rear of the chassis, as additional networking, storage and compute capacity is added to the DC. A smart approach to design, including for instance a unique color identification scheme, can enable operations staff and installers to make physical changes without making mistakes.
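To illustrate why flat topologies drive port counts up so quickly, here is a rough sketch of fiber demand in a full-mesh leaf-and-spine fabric, where every leaf connects to every spine. The leaf and spine counts below are illustrative assumptions, not figures from any specific deployment:

```python
# Fiber demand in a full-mesh leaf-and-spine fabric: every leaf
# connects to every spine, so link count scales multiplicatively.
# Leaf/spine counts below are illustrative assumptions.

def fabric_ports(leaves: int, spines: int, uplinks_per_pair: int = 1) -> int:
    """Duplex fiber ports presented at a central cross-connect that
    patches every leaf-to-spine link (each link terminates at both ends)."""
    links = leaves * spines * uplinks_per_pair
    return links * 2  # one patch port per link end

for leaves, spines in [(16, 4), (32, 8), (64, 16)]:
    print(f"{leaves} leaves x {spines} spines -> "
          f"{fabric_ports(leaves, spines)} patch ports")
# 16 x 4  ->  128 patch ports
# 32 x 8  ->  512 patch ports
# 64 x 16 -> 2048 patch ports: at 144 ports/RU, roughly 15 RU of patching
```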
As data throughput ramps up, one should also consider how the chassis is equipped to support optical transmission with low-loss characteristics. Low optical loss allows complex routing of channels across interconnect and cross-connect environments without compromise, which is especially important as optical loss budgets become increasingly stringent.
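As a rough illustration of how quickly a budget is consumed, the sketch below totals the insertion loss of a multimode channel routed through an interconnect and a cross-connect. The 1.9 dB figure is the commonly cited channel budget for 100GBASE-SR4 over 100 m of OM4; the per-connector and fiber losses are typical published values for standard versus low-loss components, not measurements of any particular product:

```python
# Channel insertion-loss budget for a multimode link that traverses an
# interconnect plus a cross-connect (4 mated connector pairs in total).
# All values are typical published figures, not product specifications.

BUDGET_DB = 1.9         # commonly cited 100GBASE-SR4 budget over 100 m OM4
FIBER_LOSS_DB_KM = 3.0  # typical OM4 attenuation at 850 nm

def channel_loss(length_m: float, mated_pairs: int, il_per_pair_db: float) -> float:
    """Total channel loss = fiber attenuation + connector insertion losses."""
    return length_m / 1000 * FIBER_LOSS_DB_KM + mated_pairs * il_per_pair_db

standard = channel_loss(100, mated_pairs=4, il_per_pair_db=0.5)   # 2.30 dB
low_loss = channel_loss(100, mated_pairs=4, il_per_pair_db=0.25)  # 1.30 dB

print(f"standard connectors: {standard:.2f} dB (budget {BUDGET_DB} dB)")
print(f"low-loss connectors: {low_loss:.2f} dB (budget {BUDGET_DB} dB)")
# Standard-loss connectors already exceed the budget at four mated pairs;
# low-loss components leave ~0.6 dB of headroom for the same routing.
```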
In the legacy data center, supporting fiber densities of 144 fiber ports per rack unit (RU) was deemed acceptable. Moving forward, DC operations teams are searching for even higher densities; in fact, chassis solutions are entering the market today with up to 210 LC ports per RU, an increase of roughly 45% over what is currently available.
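Those figures are easy to sanity-check. Assuming a standard 42U cabinet dedicated entirely to patching (an idealization; real cabinets reserve space for cable management):

```python
# Sanity check of the per-RU density figures quoted above, scaled to a
# standard 42U cabinet fully dedicated to patching (an idealization).

LEGACY_PORTS_PER_RU = 144
NEW_PORTS_PER_RU = 210
RACK_UNITS = 42

increase = (NEW_PORTS_PER_RU / LEGACY_PORTS_PER_RU - 1) * 100
print(f"per-RU increase: {increase:.0f}%")  # ~46%, i.e. the ~45% quoted
print(f"legacy cabinet: {LEGACY_PORTS_PER_RU * RACK_UNITS} LC ports")  # 6048
print(f"new cabinet:    {NEW_PORTS_PER_RU * RACK_UNITS} LC ports")     # 8820
```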
In summary, density is definitely king, and “ultra-high” density can make a real difference, especially in areas and DC types where profit per square meter is highly valued. New techniques and solutions arriving on the market can deliver a significant uplift in fiber density per RU. Our advice: choose your next fiber chassis solution carefully, and don’t settle for anything less than “ultra-high” density.