At the same time, the maturing of DevOps is placing even greater demands on data, and as enterprises begin to consider the cloud for core systems, the question arises of how to move massive datasets to the cloud at all, let alone protect and manage them once they are there.
A movement emerged last year that claimed it could help. The goal of DataOps is to bring together those who operate data and those who consume data, so that the business can run faster and more securely while better complying with regulations. But what should we expect from DataOps in 2018?
GDPR: the key to better security
In today’s software economy, data has become one of businesses’ most valuable assets. It is shocking to learn that the average lifespan of a company on the S&P 500 was 61 years in the 1950s; today it is around 17 years. Businesses need to move fast and use the data available to drive innovation. However, in the pursuit of continuous delivery, data is being distributed across multiple environments. Overall, 90% of all data in an enterprise resides in non-production environments, where copies are used for testing, development, reporting, analytics, and archiving. This data is typically less secure than production copies, making it more vulnerable to cyberattacks.
GDPR was designed to address how all data is being used in a new era. It will change the perception of data and its security to bring it in line with modern data practices. We saw the initial seeds of this change start to take root in 2017. In the year ahead, GDPR will force organisations to take data protection one step further and start viewing it as a basic human right. It will require organisations to integrate data protection into their DataOps teams, processes and tools. The ability to replace personally identifiable information with a non-sensitive equivalent, known as data masking, will propel companies towards compliance. At the same time, it will ensure that the data remains accessible and valuable, so organisations can still use information assets to drive innovation.
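To make the idea of data masking concrete, here is a minimal sketch of the technique: sensitive fields are replaced with consistent, non-sensitive substitutes so that masked copies stay usable for testing and analytics. The field names and record below are hypothetical, and real masking tools use stronger, format-preserving methods; plain hashing of low-entropy values can be reversed by dictionary attack, so production systems add salts or encryption.

```python
import hashlib

# Hypothetical set of fields treated as personally identifiable information.
SENSITIVE_FIELDS = {"name", "email"}


def mask_value(value: str) -> str:
    """Replace a sensitive value with a deterministic non-sensitive token.

    Hashing means the same input always yields the same token, so joins
    across masked datasets still line up, without exposing the original.
    """
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:10]
    return f"masked_{digest}"


def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: mask_value(val) if key in SENSITIVE_FIELDS else val
        for key, val in record.items()
    }


customer = {"id": 42, "name": "Jane Doe", "email": "jane@example.com", "region": "EU"}
masked = mask_record(customer)
```

Because the masking is deterministic, running it twice over the same production copy produces identical non-production datasets, which is what keeps masked test environments consistent with one another.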
Rewriting the physics of data with a multi-cloud approach
As we turn the page on 2017, we move on to a new chapter of the cloud era: the multi-cloud era. Organisations have already gained some experience with cloud services in one iteration or another. They are now looking at who their next generation of cloud providers will be, and are less willing to put all of their eggs in one basket. However, the movement towards multi-cloud brings a new set of challenges. Applications containing sensitive data must be properly secured, while cloud migration projects are risky and complex, requiring data for many testing and rehearsal iterations. To accelerate cloud projects, organisations must again turn to DataOps to align data operators with cloud consumers. DataOps tools such as a dynamic data platform allow the application landscape to be transitioned to multiple clouds at speed and with as little risk as possible. DataOps teams can support the cloud by identifying and securing sensitive information, replicating secure data or data changes to the cloud, and supplying test data for migrated applications ahead of go-live.
Move to open banking
With the introduction of the second Payment Services Directive (PSD2), banks will have to embrace the concept of open banking. For the first time, banks will be mandated to open their Application Programming Interfaces (APIs) and required to develop their security and messaging standards.
PSD2 will herald a new wave of third-party banking applications and integrations, all designed to use existing financial data. These will need to undergo thorough testing, and data will either make or break this process. By establishing DataOps practices early on and putting strict processes and guidelines in place, these new apps can not only thrive but create entirely new markets.
DevOps meets DataOps
Just as DevOps was a key driver of the first 10 years of the cloud era, transforming the way organisations thought about IT, DataOps will be a key driver of the next 10 years. DataOps offers a solution to many of the issues organisations will face in 2018, and it will also be a key technological driver in the coming decade. Why? Because it reduces the principal enemy of businesses everywhere: data friction.
Within all DevOps teams, there is conflict between the data operators who manage large, complex data sets and the data consumers who need fast, secure access to data sources to drive the innovation that runs the business. This friction limits the number of development environments, restricts the speed at which testing can be done, and directly impacts a DevOps team’s ability to achieve Continuous Integration and Delivery. However, with a dynamic data platform that secures sensitive data and makes the right data available to the right people at the right time, data operators and consumers can work as one. DevOps is unlocked and able to drive revenue and productivity business-wide.
DataOps will help organisations worldwide tackle the difficulties that lie ahead in the coming year, such as GDPR compliance, shaky data security and the move to multi-cloud, and it will help DevOps reach its true potential. Fundamentally, 2018 will be the year the DataOps revolution takes hold.