Scrutiny on the Data Supply Chain

While most commonly associated with manufacturing, the term ‘supply chain’ is now frequently applied to the management of data within financial services firms. In plain terms, a supply chain is a sequence of steps in which raw materials are gathered and ultimately turned into a finished product that is delivered to the end customer. The process is much the same for businesses across the financial services space; the difference is that, rather than the raw materials of manufacturing, they work with raw data.

By Martijn Groot, VP of Product Management, Asset Control.

The raw data works its way along the supply chain from the data management phase, where data is sourced quickly and cost-efficiently, to the ‘assembly line’ stage, where the data is cross-referenced. Whether we are thinking about a manufacturing supply chain or a data supply chain, being able to trace materials or data across the whole process is very important. In the case of the latter, financial services companies need to understand and audit what happens to the data across the process: who has looked at it, how it has been verified and what decisions were made along the way, with a full record kept of each. Ultimately, they need to ensure traceability, proving they can track the journey of any piece of data across the supply chain and see both where it has been and where it finally ends up.
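
As a concrete illustration of what such a traceability record might look like, the sketch below (in Python) attaches an audit trail to a single data item as it moves through the supply chain. The class and field names are illustrative assumptions for this article, not any particular vendor's schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LineageEvent:
        """One step in a data item's journey: who touched it, when, and why."""
        step: str           # e.g. "sourced", "cross-referenced", "verified"
        actor: str          # user or system that performed the step
        timestamp: datetime
        note: str = ""      # rationale for any decision taken at this step

    @dataclass
    class DataItem:
        """A piece of market or reference data plus its full audit trail."""
        identifier: str     # e.g. an ISIN or an internal security ID
        value: object
        trail: list[LineageEvent] = field(default_factory=list)

        def record(self, step: str, actor: str, note: str = "") -> None:
            self.trail.append(
                LineageEvent(step, actor, datetime.now(timezone.utc), note))

    # Trace a price from sourcing through verification, then replay its journey.
    price = DataItem(identifier="US0378331005", value=189.95)
    price.record("sourced", "feed-handler", "vendor feed A")
    price.record("verified", "jsmith", "within tolerance of prior close")
    for event in price.trail:
        print(event.step, event.actor, event.timestamp.isoformat(), event.note)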

For financial services firms that reach the end of this data supply chain, the benefit is a verified, consistent data set that supports informed opinion and in turn drives risk, trading and business decisions.


Bringing the data together in this way is important for many financial services firms. After all, the reality is that these businesses, today even more than pre-crisis, typically have many functional data silos in place, a problem made still worse by the wave of mergers and acquisitions across the sector in recent times. Today, market risk may have its own database; so too credit risk, finance stress testing and product control. In fact, every business line may have its own data set. Moreover, each of these groups will also have its own take on data quality.

Many financial services firms increasingly appreciate that this situation is no longer sustainable. The end-to-end process outlined above should help to counteract it, but why is change happening right now?


Regulation is certainly a key driver. In recent years, we have seen the advent of the Targeted Review of Internal Models (TRIM) and the Fundamental Review of the Trading Book (FRTB), both of which demand that a consistent data set is in place. The costs and the regulatory repercussions of failing to comply seem likely to rise over time.

Second, it is simply becoming too costly to keep all these different silos, many of which are internally developed systems, alive. The staff who originally developed them are often no longer with the business or have a completely different set of priorities, which makes for a very costly infrastructure. Finally, there is a growing consensus that if a standard data dictionary and vocabulary of terms and conditions are used within the business, and there is common access to the same data set, this will inevitably drive better and more informed decision-making across the business.
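
To make the idea of a standard data dictionary tangible, here is a minimal sketch in Python. The terms, field names and source systems are assumptions for illustration, not a formal industry standard.

    # A minimal shared data dictionary: every business line resolves the same
    # term to the same definition, owner and canonical source system.
    # The terms and fields below are illustrative assumptions, not a standard.
    DATA_DICTIONARY = {
        "closing_price": {
            "definition": "Last traded price at the official market close",
            "unit": "quote currency of the instrument",
            "owner": "Market Data team",
            "canonical_source": "golden-copy price store",
        },
        "credit_rating": {
            "definition": "Issuer long-term rating on the agency scale",
            "unit": "ordinal rating grade",
            "owner": "Credit Risk team",
            "canonical_source": "ratings master",
        },
    }

    def lookup(term: str) -> dict:
        """Fail loudly on undefined vocabulary instead of letting silos coin their own."""
        try:
            return DATA_DICTIONARY[term]
        except KeyError:
            raise KeyError(f"'{term}' is not an agreed business term") from None

    print(lookup("closing_price")["canonical_source"])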

Finding a Way Forward

So how can organisations start to address these issues and overcome the data challenges outlined above? They can begin by ensuring they have a 360° view of all the data coming into the organisation. They need to know exactly what data assets the firm holds – what is already on the shelf, what is being bought and what is being collected or created internally. In other words, they need a comprehensive view of exactly what data enters the organisation, how and when it does so, and in what shape and form.

This is less trivial than it might sound. In large firms in particular, often due to organisational or budgetary fault lines, the same data feed may have been sourced multiple times, or the same data product – or slight variations of it – may have been brought into the business on multiple occasions or via different channels.
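
One way to surface this kind of duplication is to build a simple inventory of feed subscriptions and group them by data product. The sketch below assumes a small, hypothetical inventory; in practice the entries would come from procurement records and entitlement systems.

    from collections import defaultdict

    # Hypothetical inventory of feed subscriptions: (data product, consuming desk, channel).
    feeds = [
        ("vendor-X equity EOD prices", "market risk", "FTP"),
        ("vendor-X equity EOD prices", "product control", "API"),
        ("vendor-Y credit ratings", "credit risk", "API"),
        ("vendor-X equity EOD prices", "finance", "FTP"),
    ]

    # Group subscriptions by data product to expose multiple sourcing of the same feed.
    by_product = defaultdict(list)
    for product, desk, channel in feeds:
        by_product[product].append((desk, channel))

    for product, subscribers in by_product.items():
        if len(subscribers) > 1:
            print(f"'{product}' is sourced {len(subscribers)} times:", subscribers)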

Firms therefore need to be clearer not only about what data they are collecting internally but also about what they are buying. With a better understanding of this, they can make more conscious decisions about what they need and what is redundant, and cut a lot of ‘unnecessary noise’ from their data supply chain.

They also need to be able to verify the quality of the data, of course – and that effectively means putting in place a data quality framework that encompasses a range of dimensions, from completeness and timeliness to accuracy, consistency and traceability.
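
Such a framework can be made operational as a set of per-dimension checks that route failing records to an exception workflow. The rules, field names and thresholds below are illustrative assumptions only.

    from datetime import datetime, timedelta, timezone

    # Illustrative price record; the field names are assumptions for the sketch.
    record = {
        "isin": "US0378331005",
        "close": 189.95,
        "currency": "USD",
        "as_of": datetime.now(timezone.utc) - timedelta(hours=2),
        "source": "vendor feed A",
    }

    def completeness(rec: dict) -> bool:
        """Completeness: all mandatory fields are present and non-empty."""
        return all(rec.get(f) not in (None, "") for f in ("isin", "close", "currency"))

    def timeliness(rec: dict, max_age: timedelta = timedelta(hours=24)) -> bool:
        """Timeliness: the data is recent enough for end-of-day processes."""
        return datetime.now(timezone.utc) - rec["as_of"] <= max_age

    def accuracy(rec: dict, prior_close: float, tolerance: float = 0.15) -> bool:
        """Accuracy proxy: flag price moves beyond a tolerance band for review."""
        return abs(rec["close"] - prior_close) / prior_close <= tolerance

    checks = {
        "completeness": completeness(record),
        "timeliness": timeliness(record),
        "accuracy": accuracy(record, prior_close=186.40),
    }
    # Any False result would route the record to an exception workflow.
    print(checks)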

To deal with all these data supply chain issues, of course, businesses need the right governance structure and organisational model in place. Consultants can help here by advising on processes and procedures, ensuring, for example, that the number of individual departments independently sourcing data is reduced and that there is a clear view of what constitutes fit-for-purpose data.

Once the right processes and procedures are in place, alongside a good governance structure, the organisation can start to think about a technological solution.

The Role of Technology

Technology can play a key role in helping organisations get a better handle on their data supply chains, and can be used to fulfil a number of duties: data sourcing and integration, supporting workflow processes, and data reporting. In each instance, businesses will have set requirements to meet to ensure the processes run quickly and efficiently, so it is important to spend time finding the right systems with the right capabilities. Finally, financial services businesses should address any gaps in the data and assess where the organisation stands in providing data to business users for ad-hoc use.
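
As a toy illustration of those duties strung together, the sketch below sources a feed, integrates it by separating usable records from exceptions, and reports the outcome. The feed layout and function names are assumptions for the example, not a real system's interface.

    import csv
    import io

    # Toy incoming feed; in practice this would be a vendor file or API response.
    RAW_FEED = "isin,close\nUS0378331005,189.95\nGB0002634946,\n"

    def source() -> list:
        """Sourcing: read rows from the incoming feed (here, an in-memory CSV)."""
        return list(csv.DictReader(io.StringIO(RAW_FEED)))

    def integrate(rows: list) -> tuple:
        """Integration: keep complete rows; queue incomplete ones as exceptions."""
        accepted, exceptions = [], []
        for row in rows:
            (accepted if row["close"] else exceptions).append(row)
        return accepted, exceptions

    # Reporting: a simple summary of what was accepted and what needs attention.
    accepted, exceptions = integrate(source())
    print(f"report: {len(accepted)} accepted, {len(exceptions)} exceptions")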

The issue of providing data to business users is a particularly important point that should not be forgotten in the data supply chain. As well as understanding and monitoring their supply chains and ensuring that auditing and traceability are in place, financial services businesses must also ensure that data governance and data quality checking are fully implemented. After all, financial services businesses will gain only limited value from their data supply chains, however efficient, if they do not make the data itself readily available for users to browse, analyse and use in the decision-making processes that ultimately drive business advantage and competitive edge.
