Better, faster, stronger… cheaper – why now is the time to move towards ‘design for operations’ in the IT world

As businesses continue down the path of digital transformation and do away with legacy systems, there is also a need to rid enterprises of legacy mindsets. By Chris Swan, DXC Fellow, VP and CTO for Global Delivery, DXC Technology.


One of the challenges many organisations face is moving away from the old model of buying software or systems, using them for as long as possible, and only then updating them in an effort to improve ROI. Today, the key to ensuring continual improvement is to adopt a “design for operations” approach to software development and delivery. Design for operations means software is designed to run continuously, with frequent incremental updates that can be made at scale, rather than single large updates, which are cumbersome and complex.


Designing for operations takes into consideration the end-to-end costs of delivering and servicing the software, not just the initial development or purchase costs. In addition, the approach applies intelligent automation at scale and connects ever-changing customer needs to automated IT infrastructure, making it a core part of digital transformation initiatives. The entire process relies on DevOps, enabled by software pipelines that support continuous delivery. So how can businesses make use of an approach that encompasses design for operations?
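
To make the pipeline idea concrete, here is a minimal sketch of a continuous-delivery pipeline expressed as a plain Python script. It is an illustration under assumptions, not a reference to any particular toolchain: the stage names, the packaging and test commands, and the deploy.sh script are all hypothetical stand-ins.

```python
# Minimal sketch of a continuous-delivery pipeline as a plain script.
# In practice the same stages would usually live in a CI/CD service;
# the commands below (and deploy.sh) are illustrative assumptions.
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "build"]),     # package the application
    ("test", ["pytest", "-q"]),               # run the automated test suite
    ("deploy", ["./deploy.sh", "--canary"]),  # hypothetical script that ships one small increment
]

def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"== stage: {name} ==")
        if subprocess.run(command).returncode != 0:
            # A failing stage halts the release, so broken changes never reach production.
            sys.exit(f"stage '{name}' failed; release halted")

if __name__ == "__main__":
    run_pipeline()
```

Because every change travels through the same small, automated steps, releases become routine events rather than occasional, high-risk projects.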


First, it is important to understand that products and services pass through various stages of design evolution:


  1. Design for purpose (the product performs a specific function)
  2. Design for manufacture (the product can be mass produced)
  3. Design for operations (the product encompasses ongoing use and the full product life cycle)


The car industry is a good example: from Daimler’s horseless carriage, to Ford’s Model T, and finally to a modern VW Golf (or any other vehicle that can be sold with a service plan). The act of including a service plan means the manufacturer incurs the costs of servicing the car after it’s purchased, so the car maker is now responsible for the end-to-end life cycle of the car.


The technology industry is no different — from early code-breaking computers like Colossus, to packaged software such as Oracle, to software-based services like Netflix. The key point is that software-based service companies like Netflix have understood they own the end-to-end cost of delivering their software, and have optimised everything accordingly, using practices we now call DevOps.


There are efficiencies that can be achieved only with software designed for operations. This means that companies running bespoke software (designed for purpose) and packaged software (designed for manufacture) have a maturity gap, where the liability is greater than the value. If that gap can be closed, delivery can be better, faster and cheaper (there is no need to pick just two).


For enterprises it’s essential to close this gap, because if competitors can deliver better, faster and cheaper, that puts them at a huge advantage. This even includes the public sector, since government departments and local authorities are all under pressure to deliver higher quality services to citizens at a lower cost to the government.


Learning to “shift left”


Thinking back to the design-for-purpose example, central to that approach is that functional requirements (what the software should do) are prioritised over non-functional requirements (security, compliance, usability, maintainability). As a result, things like security are usually bolted on later in the creation process. In many cases, these gaps accrue as technical debt: decisions that seem expedient in the short term become costly in the long term.


The concept of “shifting left” centres on ensuring that all requirements are included in the design process from the beginning. Think of a project timeline and “shifting left” the items in that timeline, such as security and testing, so they happen sooner. In practice, that doesn’t have to mean a lot of extra development. Careful choices of platforms and frameworks can help to ensure that aspects such as security are baked in from the beginning. Systems and services built this way are far more secure than ones that simply have security bolted on at the end of the process, as there is less opportunity for attackers to find gaps in defences.
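
As one illustration of security being baked in through framework choice, the sketch below uses FastAPI purely as an assumed example: input validation and authentication are declared once, up front, and enforced on every request before any business logic runs. The endpoint, the field limits and the API-key check are hypothetical.

```python
# Minimal sketch of "security baked in from the beginning" via framework
# features. FastAPI and the /orders endpoint are illustrative assumptions.
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader
from pydantic import BaseModel, Field

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")

def require_api_key(key: str = Depends(api_key_header)) -> str:
    # Placeholder check; a real service would verify against a secret store.
    if key != "expected-key":
        raise HTTPException(status_code=401, detail="invalid API key")
    return key

class Order(BaseModel):
    # Validation rules are declared once and applied to every request payload.
    item_id: int = Field(gt=0)
    quantity: int = Field(gt=0, le=100)

@app.post("/orders")
def create_order(order: Order, _: str = Depends(require_api_key)) -> dict:
    # By the time this runs, the payload is validated and the caller authenticated.
    return {"status": "accepted", "item_id": order.item_id}
```

The point is not the particular framework: it is that non-functional requirements are part of the design from the first line of code rather than an afterthought.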


Contemporary development practices support this when we ask, “how do we know that this application is performing to expectations in the production environment?” That moves well beyond “does it work?” and starts asking “how might it not work, and how will we know if it doesn’t?” It turns the whole design process on its head, forcing everyone involved to consider not just what the process needs to encompass, but when it is most advantageous to execute and implement each part.
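
One way to make “how will we know if it doesn’t work?” answerable is for the service to report on itself. The sketch below is a minimal, assumed example using the prometheus_client library: the application publishes its own request counts and latencies so that operations can see when behaviour drifts from expectations. The metric names and the simulated workload are purely illustrative.

```python
# Minimal sketch of a service exposing its own operational metrics.
# Metric names and the simulated workload are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled", ["outcome"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()  # records how long each request takes
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work
    if random.random() < 0.05:              # occasional simulated failure
        REQUESTS.labels(outcome="error").inc()
        raise RuntimeError("simulated failure")
    REQUESTS.labels(outcome="success").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics readable at http://localhost:8000/metrics
    while True:
        try:
            handle_request()
        except RuntimeError:
            pass
```

With signals like these defined at design time, “performing to expectations” becomes a question the system itself can answer.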


For enterprises, there are huge advantages in adopting a design for operations model that includes a comprehensive approach to intelligent automation. By combining analytics, lean techniques and automation capabilities, service-based solutions can produce greater insights, alongside improved speed, from day one. In addition, keeping systems running at their best becomes much easier, as the process for incremental upgrades is set in motion from the start of the project.
