In the case of a data centre, an organisation must ensure that it operates as close to 100 per cent uptime as possible while delivering all of its services as reliably and efficiently as required. As well as likely regulatory pressures motivated by environmental concerns, there is a perennial need to keep costs to a minimum. Operating a data centre efficiently requires stringent management oversight at all stages, beginning with the data centre design.
The most obvious costs are incurred by the infrastructure itself: how many physical servers does the organisation need? How much storage? What level of cooling is required? What sort of UPS capacity? The answers to these questions can only be provided by a detailed requirements analysis of the current and future IT load and the level of service expected of the data centre - considerations which are required both at the start of operations and throughout its life.
In the case of UPS protection, essential to guarantee the continuity of power in the data centre, it would be highly inefficient to have 500kW of capacity if, after ten years, the data centre were only running at 250kW or less. Making an accurate assessment of likely load, although difficult, is essential.
A better approach is to right-size by deploying modular and scalable UPSs, which can support increases and decreases in capacity to meet ever-changing business demands. This provides the best possible trade-off between reliability and efficiency.
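The cost of oversizing can be made concrete with a small calculation. The efficiency curve below is purely illustrative (real figures come from the manufacturer's data sheet), but it captures the typical pattern: a UPS running at half load is less efficient than one running near capacity, so a right-sized modular deployment wastes less power at the same IT load.

```python
def ups_efficiency(load_fraction):
    """Hypothetical efficiency curve: falls off at light load (illustrative only)."""
    return 0.97 - 0.10 * (1 - load_fraction) ** 2

def losses_kw(it_load_kw, capacity_kw):
    """Input power minus useful IT load for a UPS of the given capacity."""
    eff = ups_efficiency(it_load_kw / capacity_kw)
    return it_load_kw / eff - it_load_kw

it_load_kw = 250                              # actual load after ten years
fixed_losses = losses_kw(it_load_kw, 500)     # oversized monolithic UPS
modular_losses = losses_kw(it_load_kw, 300)   # right-sized with 100 kW modules

print(f"fixed 500 kW UPS losses:   {fixed_losses:.1f} kW")
print(f"modular 300 kW UPS losses: {modular_losses:.1f} kW")
```

Run continuously, the difference in losses compounds into a significant energy bill, which is the efficiency side of the trade-off that modular scaling addresses.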
Cooling is another costly function that depends on the IT load of a data centre and therefore has to be considered at the design stage. An example of where careful planning, coupled with ongoing monitoring and measurement of operations, resulted in major efficiency improvements is the data centre at Cardiff University, which houses, among other things, a high-performance computing hub for use in advanced research projects.
Installing high-efficiency chillers at the design stage, together with a dedicated efficiency-monitoring module in the DCIM software suite, allowed highly granular insights into how control changes affected the operation of the data centre. The result was a significant reduction in PUE and consequent savings in energy costs.
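PUE (Power Usage Effectiveness) is simply the ratio of total facility energy to the energy delivered to IT equipment, so the savings from reducing it are easy to quantify. A minimal sketch with hypothetical annual figures (these are not Cardiff University's actual numbers):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

it_kwh = 1_000_000                 # annual IT equipment consumption (assumed)
before = pue(1_800_000, it_kwh)    # PUE 1.8 before efficiency work
after = pue(1_400_000, it_kwh)     # PUE 1.4 after chiller and control changes

savings_kwh = (before - after) * it_kwh
print(f"PUE improved from {before:.1f} to {after:.1f}, "
      f"saving {savings_kwh:,.0f} kWh per year")
```

Because PUE is normalised by the IT load, it isolates the overhead of power and cooling infrastructure, which is exactly what granular monitoring lets an operator tune.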
White Paper #221 by Kevin Brown, Vice President of Data Center Global Solution Offer & Strategy at Schneider Electric, discusses the impact of rising temperatures on data centre efficiency. It states that while raising temperatures may improve cooling efficiency and so create the perception of cost savings, these may be offset by an increase in IT energy consumption as well as in the total energy required by the cooling infrastructure.
By incorporating scalable technologies such as In-Row cooling and the separation of warm and cold air via containment, businesses can realise significant efficiency gains whilst also benefitting from improved redundancy.
Of course design is only the first stage of efficient operation. To match power and cooling functions to the IT load on an ongoing basis requires continuous monitoring, measurement and management of all operational aspects.
Data-centre management should also be provided with the correct level of information, enabling them to predict initial and future IT loads as accurately as possible.
Fortunately there are ample software tools available to support these processes. Data Centre Infrastructure Management (DCIM) software suites such as Schneider Electric’s StruxureWare for Data Centers™, can measure the effectiveness of power and cooling systems, whilst also communicating seamlessly with IT infrastructure and traditional IT management tools from leading vendors including Cisco, HP, Intel and Microsoft.
As well as helping to reduce the immediate costs of operating power and cooling systems, such tools provide more intangible benefits by helping to reduce the overall IT load in the first place, with knock-on benefits for the amount of infrastructure required. With the growth of convergence and virtualisation, the ability to run numerous virtual machines on a single server enables the identification of stranded capacity and means applications can run at an optimal level. Ultimately, this will result in a reduction of the physical infrastructure that is required.
As you turn off more servers while delivering the same applications and data, you reduce the power and cooling load required. The information collected here has to be fed back to those responsible for future planning and change management, providing an ever more accurate picture of capacity and thereby supporting more efficient designs for future generations of data-centre deployments.
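The consolidation arithmetic behind this is straightforward. The sketch below uses assumed figures throughout (utilisation levels, per-server power draw and the facility overhead multiplier are all illustrative) to show how virtualising lightly loaded physical servers reduces both the IT load and the power and cooling load on top of it:

```python
import math

servers_before = 100          # lightly loaded physical servers (assumed)
avg_utilisation = 0.10        # typical utilisation of a dedicated server
target_utilisation = 0.60     # consolidation ceiling on virtualised hosts
server_power_kw = 0.4         # average draw per server (assumed)
pue_factor = 1.6              # facility overhead multiplier (assumed)

# Hosts needed to carry the same total workload at the target utilisation.
servers_after = math.ceil(servers_before * avg_utilisation / target_utilisation)

it_saving_kw = (servers_before - servers_after) * server_power_kw
facility_saving_kw = it_saving_kw * pue_factor   # includes cooling overhead

print(f"{servers_before} servers consolidate onto {servers_after} hosts, "
      f"saving {facility_saving_kw:.1f} kW at the facility level")
```

Note that every kilowatt of IT load removed saves more than a kilowatt at the facility level, because the cooling and power infrastructure overhead scales with it.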
Not all of the cost benefits of such an approach are immediate. Metrics like PUE reveal an immediate benefit from implementing a more efficient data centre, but for a data centre to remain truly efficient it needs to be designed with the input of key business stakeholders such as IT, Facilities and Finance.
Too often we witness static, outdated server rooms, comms rooms and data centres being provisioned with no bearing on actual IT needs. This results in data centres being built as an afterthought, rather than as a critical business function.
Feeding the lessons learned back into the planning process may not produce such visible benefits in the short term, but in the long run it reaps rewards by producing better needs analysis, optimising designs and consequently improving the efficiency of data centres in the future.
It allows IT organisations to get closer to the ideal of right-sizing: providing just the power and cooling infrastructure needed to support the IT equipment required to deliver business strategies.