Portal 2.0 brings a new operational capability built on the same predictive modeling technology that Romonet is well known for. Portal 2.0 adds the ability to compare ‘expected vs actual’ performance down to individual sub-systems, enabling operational teams to quickly identify issues and fix them before they become service-impacting.
“DCIM and sub-system metering have become very popular in the data center as operators grapple with managing the reliability and performance of their increasingly complex and costly data centers. However, having actual metered data at such a granular level without knowing what each meter ‘should’ be reading can lull operational teams into a false sense of security. With Portal 2.0 deployed on top of a DCIM/metering solution, operational teams can now see an ‘expected’ value against each sub-meter. Ultimately this allows operators to quickly spot divergences in performance anywhere in the system and stop potentially service-disrupting issues before they occur. That’s the true power of predictive analytics,” said Zahl Limbuwala, CEO of Romonet.
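The expected-vs-actual comparison Limbuwala describes can be illustrated with a minimal sketch. The meter names, readings, and tolerance below are hypothetical examples, not Romonet's actual model or API; the point is simply how a modeled ‘expected’ value per sub-meter lets divergences be flagged automatically.

```python
# Illustrative sketch only: flag sub-meters whose actual readings diverge
# from the modeled 'expected' values. All names and numbers are hypothetical.

def flag_divergences(expected, actual, tolerance=0.05):
    """Return meters whose actual reading deviates from the expected
    value by more than the given relative tolerance (default 5%)."""
    flagged = {}
    for meter, exp in expected.items():
        act = actual.get(meter)
        if act is None:
            continue  # meter not reporting; skip rather than guess
        deviation = abs(act - exp) / exp
        if deviation > tolerance:
            flagged[meter] = round(deviation, 3)
    return flagged

# Example: a chiller drawing 10% more power than the model predicts
expected = {"UPS-A": 120.0, "CRAC-1": 45.0, "Chiller-2": 200.0}
actual   = {"UPS-A": 121.5, "CRAC-1": 44.8, "Chiller-2": 220.0}
print(flag_divergences(expected, actual))  # only Chiller-2 is flagged
```

In practice the ‘expected’ values would come from a predictive model of the facility rather than a static table, but the comparison step is the same: per-meter expectations turn raw DCIM data into actionable exceptions.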
Romonet, often described as the Holy Grail for data center operators, is the only technology that brings together an in-depth view of facilities, IT and business to swiftly, simply and intelligently forecast, plan and track business performance.
Romonet Portal 2.0 is based on Romonet’s proprietary, award-winning predictive modeling technology, allowing users to forecast, plan and measure business performance with greater insight, accuracy and agility than previously possible. Unlike many DCIM solutions, Portal 2.0 can be deployed and delivering value in days, without disruptive and costly hardware or software agents. “Typically we can model any running data center in no more than 3–5 man-days and have customers up and running in Portal, getting value, in under two weeks,” Zahl added.
Romonet Portal 2.0 helps organizations in the following ways:
- Identify exactly where meters are needed in the data center infrastructure, and how many, to gain the best possible insight into performance, thereby justifying investment in the right level of metering.
- Reduce risk in capital investment by accurately modeling the outcome of each investment option.
- Optimize performance by swiftly identifying discrepancies between expected and actual performance of new infrastructure while still at the commissioning stage, allowing issues to be identified and addressed before going live.
- Lower the Total Cost of Ownership by giving organizations insight into their IT costs and showing exactly where savings can be made.
- Increase predictability in IT strategy by allowing organizations to see precisely how their data center portfolio should be performing at any point in each site’s lifecycle.
- Streamline analysis through a redesigned user interface, allowing organizations to quickly and intuitively identify and investigate discrepancies.
With Portal 2.0, businesses have an immediate view of how efficiently they are managing and utilizing their data center infrastructure spend. As a result, it is much easier to reveal the true costs of IT services. For enterprises, this prevents the data center from becoming a cost black hole, absorbing investment with no clear indication of how that investment benefits the business. For service providers, understanding total delivery costs means they understand margin per client or per service.
“Management by exception has proven invaluable to the success of many businesses: bringing it into the data center is a logical step,” said Zahl Limbuwala. “Portal 2.0 removes the headaches traditionally associated with capacity planning and cost modeling, allowing organizations to focus on the investment decisions that will best help their business.”