Maximising ROI from data center virtualization and consolidation through end user performance management

By Roger Holder - Fluke Networks.


EFFECTIVE, persistent management of a consolidated data center is the key to unlocking return on investment. Organisations need a solution which can manage performance from the perspectives of all stakeholders – business units, IT and end users – to ensure the project delivers the required ROI.

Data centers are increasingly transforming from a traditional, distributed infrastructure to a consolidated, service-oriented structure. The business case is compelling, as consolidation enables organisations to implement more advanced protocols and management strategies that maximise bandwidth utilisation and performance of both network and applications. It also creates the opportunity to implement application virtualisation which offers further benefits.

Planning consolidation to maximise benefits
To realise these benefits organisations need to manage the consolidated architecture closely against performance metrics.
The costs of getting it wrong
• Average cost of data center downtime per minute: $5,600
• Average reported downtime: 90 minutes
• Average cost of an incident: $505,000
(Source: Forrester Consulting, 2011)
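
As a quick sanity check, the per-minute figure multiplied by the average duration lands very close to the quoted per-incident cost. A minimal sketch, using only the figures above:

```python
# Back-of-the-envelope check on the downtime figures quoted above
cost_per_minute = 5_600      # USD, average cost of downtime per minute
avg_downtime_min = 90        # minutes, average reported downtime

estimated_incident_cost = cost_per_minute * avg_downtime_min
print(f"Estimated cost per incident: ${estimated_incident_cost:,}")
# -> $504,000, close to the reported $505,000 average
```
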
To achieve the full benefits of consolidation and avoid expensive downtime, organisations need to follow a clear process:
• Obtain an in-depth understanding of the existing network, applications and services, and benchmark their performance
• Set metrics for the desired performance of the consolidated data center
• Plan the transition
• Implement the transition with minimum downtime
• Monitor and manage the updated architecture to ensure it achieves the required metrics.

Clear benchmarks are vital. Without metrics for pre- and post-consolidation performance, organisations cannot measure ROI. These metrics need to cover the impact on all stakeholders – business unit owners, IT and operations staff, corporate management and end users. Organisations need to address three areas: reporting, performance management and personnel. The rest of this article will focus on the first two – reporting and performance management.

Reporting
With data center consolidation, resources that were previously distributed across the enterprise are gathered into a common pool. As a result, business units that once managed and maintained their own networks, applications and services have to relinquish control to a central team.

The business units now in effect become internal customers of the consolidated data center. To continue supporting it, they need to be assured that their critical applications are performing at or above the same levels as when they were controlled by the business unit. This means establishing internally facing service level agreements between the data center and the business units. Metrics such as application availability and end user response time for transactions between the desktop and the data center should be compiled, tracked and reported regularly to provide the evidence necessary to keep business unit owners on board.
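
To illustrate the kind of internal SLA reporting described above, the short Python sketch below computes application availability and a response-time compliance rate from a hypothetical list of transaction records. The record format and threshold are assumptions for illustration, not a specific Fluke Networks API.

```python
# Illustrative sketch: compute internal SLA metrics from hypothetical transaction records.
# Each record: (application, response_time_seconds, succeeded) - format assumed for illustration.
transactions = [
    ("order-entry", 0.42, True),
    ("order-entry", 1.85, True),
    ("order-entry", 0.00, False),   # failed transaction counts against availability
    ("payroll",     0.31, True),
    ("payroll",     0.55, True),
]

RESPONSE_TIME_SLA = 1.0  # seconds; example threshold agreed with the business unit

apps = {app for app, _, _ in transactions}
for app in sorted(apps):
    records = [t for t in transactions if t[0] == app]
    ok = [t for t in records if t[2]]
    availability = 100.0 * len(ok) / len(records)
    within_sla = 100.0 * sum(1 for t in ok if t[1] <= RESPONSE_TIME_SLA) / max(len(ok), 1)
    print(f"{app}: availability {availability:.1f}%, "
          f"response time within SLA {within_sla:.1f}%")
```

Reported regularly, figures like these give business unit owners direct evidence that the consolidated data center is meeting the agreed service levels.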

Performance management
While data center consolidation and the application virtualisation that often accompanies it may streamline enterprise architecture, they introduce management complexity. As more services are virtualised, it becomes increasingly difficult to provide a single view of application usage from data center to desktop, because a single physical server can host multiple virtual machines. With database servers, application servers, email servers, print servers and file servers all potentially sharing the same piece of hardware, tracking network, application and service performance becomes much more difficult.

Finding the right management tool (or tools) is another challenge. Most legacy performance management tools operate best in a silo, as each focuses on a specific application, service, or geographical or logical slice of the network. This may be acceptable in a distributed architecture – although problems can hide in the gap between an NMS that lacks comprehensive information and complex packet capture tools – but it causes problems in a consolidated data center, where the number of silos grows with the addition of application virtualisation management tools that have not yet been integrated with the legacy performance management tools.
In this situation, network engineers have to rely on a set of disparate tools, each with its own unique capabilities and user interface. They have to use their experience to manually correlate information in order to identify, isolate and resolve problems.

In the best case, performance management is carried out in much the same way as in the distributed environment, bypassing the opportunity to capitalise on co-located information and personnel. In the worst case, it results in finger-pointing between operations and IT teams and slows anomaly resolution, causing problems for both end users and management. To address these issues, organisations need a better way of managing performance and reporting.

Consolidating performance management
Businesses need an end-to-end solution with the scalability, breadth and depth to acquire, integrate, present and retain information that truly reflects the performance of networks, applications and services from the business unit, IT and, most importantly, end-user perspectives.

A consolidated performance management solution will provide information on all aspects of the network to all parties. This will assist in effective problem resolution without finger-pointing, as well as providing the data to calculate reporting and management metrics such as SLA performance, usage and billing. Measuring performance from a user perspective is more difficult with the additional layer of abstraction inherent to application virtualisation because there is usually less physical evidence available than in the traditional environment in which servers and applications are tightly coupled. It requires network engineers to adopt new configuration and monitoring methodologies, as there are fewer physical switches and routers to support.

Visibility and security within the virtual environment are huge concerns. Before, during and after migration, it is critical to use SNMP, NetFlow and virtual taps to monitor the health, connectivity and usage of these now-virtual systems. Solutions from Fluke Networks, which measure from the end user into the data centre, enable engineers to understand what is happening on the network from the end user's perspective and hence identify and resolve issues more quickly.
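
As a generic example of the SNMP side of that monitoring (not a Fluke Networks feature), the sketch below polls a standard interface counter using the classic synchronous high-level API of the pysnmp library; the target host, community string and interface index are placeholders.

```python
# Minimal SNMP poll of a standard interface counter (IF-MIB::ifInOctets).
# Host, community string and interface index are placeholders for illustration.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),             # SNMPv2c
    UdpTransportTarget(('vswitch01.example.com', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', 1)),
))

if error_indication:
    print(f"SNMP error: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```

Polling counters like this before and after migration gives a simple, repeatable baseline for the health and usage of virtualised systems.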

They can provide visibility into this front-tier performance, aggregated by site with per-user comparisons, as well as views into the interactions within each user session and per-published-application performance metrics. This saves time when troubleshooting issues and helps network engineers become proactive in managing the performance of this application delivery scenario.

For more information on end-to-end troubleshooting please visit our resource center www.flukenetworks.com/instantvisibility

 

Making the most of DCIM


Rob Elder, Director of Keysource.


MANY DATA CENTRE OWNERS and operators are missing a trick by not integrating their physical M&E infrastructure with the technology infrastructure to centralise monitoring, management and intelligent capacity planning of a data centre's critical systems. This is backed up by a recent survey of data centre professionals, which found that whilst almost three quarters are now familiar with Data Centre Infrastructure Management (DCIM) solutions, there is still a long way to go before the adoption of this integrated approach to management and monitoring becomes widespread.
The majority of these data centre owners and operators already have some form of status and performance monitoring of mechanical and electrical infrastructure, and more than 60 per cent do plan in advance for the potential impact of changes on data centre utilisation, availability and efficiency.

However, almost half of them have no tools in place to track and manage IT assets in their data centre, and only 40 per cent measure power usage in real time across different sub-systems and infrastructure. DCIM can now help achieve complete visibility over all assets to support faster and more informed decision-making, enabling operating efficiencies to be increased and energy consumption to be reduced.
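
To illustrate the kind of real-time measurement DCIM enables, the sketch below derives Power Usage Effectiveness (PUE) from hypothetical facility and IT power readings; in practice the values would come from DCIM-connected meters rather than hard-coded numbers.

```python
# Illustrative PUE calculation from hypothetical power readings (kW).
# In practice these values would come from DCIM-connected meters and sub-system monitoring.
it_load_kw = 420.0          # power drawn by IT equipment (servers, storage, network)
cooling_kw = 180.0          # mechanical plant
power_losses_kw = 45.0      # UPS, distribution and lighting losses

total_facility_kw = it_load_kw + cooling_kw + power_losses_kw
pue = total_facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")   # lower is better; 1.0 would mean zero facility overhead
```
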

These solutions can cater for all aspects of critical infrastructure, not just one or two such as efficiency or power usage, and enable integration with other building and IT systems.

In fact, organisations can effectively manage all of their critical systems and infrastructure to achieve a range of business and operational benefits: for example, the ability to identify at the deployment stage where equipment should go to make best use of resources such as space, network, cooling and power, without limiting future options.

Meanwhile, the decision to build a new facility or upgrade an existing one to achieve additional capacity or improve efficiency can be taken with better insight, without the need for lengthy site audits and consultants' fees.
With the right data, knowledge and expertise to understand what critical systems are doing, DCIM can deliver much more value and become an enabler for your organisation.
However, getting the basic metering and monitoring in place and implementing a solution is simply the first step, with the real opportunities coming from how you use the software and when you start to integrate it with other systems.
Virtualisation is already playing a major part in the way organisations view their compute and storage, with workloads moved between data centres. Combined with cloud computing, this offers the opportunity to maximise resource utilisation to meet business demand.

The integration of DCIM with IT systems and the building management system (BMS) gives operators the opportunity to automate the movement of applications and workloads depending on a range of scenarios.
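
As a simple illustration of that kind of automation, the sketch below applies a threshold rule to hypothetical rack telemetry and flags workloads for migration. The telemetry values, thresholds and the migrate_workload() call are placeholders, not a real DCIM or BMS API.

```python
# Illustrative automation rule: flag workloads for migration when a rack runs
# short of cooling or power headroom. Values and migrate_workload() are
# hypothetical placeholders, not a real DCIM/BMS API.
RACK_TELEMETRY = {
    "rack-a1": {"inlet_temp_c": 24.1, "power_kw": 7.8, "power_cap_kw": 10.0},
    "rack-b4": {"inlet_temp_c": 29.6, "power_kw": 9.7, "power_cap_kw": 10.0},
}

MAX_INLET_TEMP_C = 27.0   # example inlet temperature limit
POWER_HEADROOM = 0.9      # migrate if a rack exceeds 90% of its power cap

def migrate_workload(rack: str) -> None:
    # Placeholder: in practice this would call the virtualisation platform's API.
    print(f"Requesting workload migration away from {rack}")

for rack, t in RACK_TELEMETRY.items():
    too_hot = t["inlet_temp_c"] > MAX_INLET_TEMP_C
    near_cap = t["power_kw"] > POWER_HEADROOM * t["power_cap_kw"]
    if too_hot or near_cap:
        migrate_workload(rack)
```
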

Meanwhile, the biggest focus for improving efficiency in the data centre over the past few years has been cooling. From a design and build point of view this is absolutely the right approach to take, but it's worth bearing in mind that the biggest waste of energy is actually at the server level itself, with only about 6-12% of the power delivered to a server being used to perform a computation.

With power capping already part of the DCIM suite, the next step is to match server capacity to a workload or cluster so that it is better utilised. This can push utilisation rates to as much as 80% and ultimately reduce the number of servers needed.
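
A rough worked example of that consolidation effect, with assumed workload and utilisation figures (only the 80% target comes from the text above):

```python
import math

# Rough consolidation estimate with assumed figures: the workload, per-server
# capacity and current utilisation below are illustrative, not measured data.
total_workload_units = 2_000        # aggregate demand in arbitrary capacity units
per_server_capacity = 100           # capacity units one server can deliver

current_utilisation = 0.25          # assumed low utilisation before consolidation
target_utilisation = 0.80           # the figure quoted above

servers_now = math.ceil(total_workload_units / (per_server_capacity * current_utilisation))
servers_after = math.ceil(total_workload_units / (per_server_capacity * target_utilisation))

print(f"Servers needed at 25% utilisation: {servers_now}")    # 80
print(f"Servers needed at 80% utilisation: {servers_after}")  # 25
```
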
DCIM is certainly here to stay, so organisations will have little choice but to adopt these types of tools if they want to continue to meet the demands of their business moving forward.

The technology, and therefore the opportunities, will continue to develop, and being able to take advantage of them will provide clear competitive advantage now and in the future.

 

PEDCA gets to work


I WRITE AT A PARTICULARLY BUSY TIME for the Data Centre Alliance. Hot on the heels of the graduate “Bootcamp” project, government meetings and the “New Statesman” data centres special report, the PEDCA project is moving out of the kick-off phase and into the interesting phase where it engages with the industry at large.

The €1.7 million European Commission funding is to deliver a training and research “Joint Action Plan” for the data centre industry; this is vital work for our sector. Project PEDCA therefore got to work with a wide range of industry leaders in London on 25th September.

The event's objective was to gain input into the scope of the project, including ensuring its aims and objectives are valuable and focused. The work included a review of the industry survey results and a series of workshop focus panels.

The event kicked off with a series of presentations headed up by DCA Senior Vice President and Emerson CTO Prof Ian Bitterlin, who illustrated the industry's issues and challenges. Following this were presentations on the project itself and a look at the PEDCA survey results. This got everyone in the right frame of mind for the workshops, which proved extremely valuable. Overall, about 250 people attended the event, which is a great start in ensuring the project moves ahead with a strong industry participant network.

The “joint” part of the “action plan” is absolutely essential for the project's success, and that means you and your colleagues. If you missed this survey and the event, don't worry: there is much more to come. But do not miss the opportunity to participate in the next survey, coming soon, which starts to move into territory that gets serious for all of us. If you are a DCA member, read your emails; if you would like to participate in PEDCA, you can do so free via
www.datacentrealliance.org