From the early part of the last decade through 2008, the financial services industry was awash in profits, a party where nearly everyone had their fill. Then, in late 2008, the excess finally caught up with the markets and a lengthy hangover ensued. Four years later, the financial system still has a significant number of toxins within its assets, operations and, at the core, its data.
In data management terms, toxicity can be described as the vestigial data processes built over the last decade to support increasingly complex, global trading and investment businesses. The problem is that for many institutions the sheer scale of data management has created monumental, monolithic structures focused on delivering a single view of the truth when, in fact, multiple golden copies are required. The alternative, in which each system, business unit or asset class sources its own data, is no better: given the substantial data management requirements financial services firms now face, it leads to chaos, duplication, inefficiency, obscurity and added cost.
Given how fast the markets, and now the regulators, can move, the pressure to deliver pricing, risk management and compliance mechanisms in a timely manner has led the industry to gorge on quick data fixes. The result is that data management often resembles a mishmash of supposedly temporary patches, unlikely appendages and redundant mechanisms. As the hangover from the global financial crisis wore on, it became clear how poorly these haphazard methods managed risk across the financial markets.