High-Speed Data and OTC Markets

22 October 2012

Big Data, Small Governance: Why Federated Data Strategies Will Win

Firms have invested millions of dollars in centralized data infrastructure, but most of these investments result in bureaucracy, hindering growth, stifling innovation and increasing costs while doing little to solve the problem, says TABB's Larry Tabb.

The financial market industry doesn’t sell socks, computers or dishwashers. Instead, we sell the intangible: hope, dreams and fear.

Working as we do in an industry focused on the intangible, we quantify our goals, progress and risks with data – in fact, lots of it, surrounding ourselves at times with an overwhelming amount of data about our investments, positions, risks and opportunities.

We also interrogate that data: How much can we make? What’s our downside? How much risk are we exposed to? What happens if we’re right? And, more important, what if we’re wrong?  

Because our world revolves around data, you would think that our data infrastructures are robust. But unfortunately, they’re not nearly as robust as you’d think, or for that matter, as regulators require.

Most firms -- from banks to brokers, investors, hedge funds and custodians -- have a rather difficult time managing their data. While those firms that invest in a single geography, asset class or product do have an easier time, firms investing across all three dimensions are lucky if they can even reconcile their assets, not to mention their counterparties, issuer risk and the market risk they’ve underwritten.

This is not just a technology challenge, either. Because systems have been acquired, assembled and developed over time, with varying technology and data architectures, it is as much an organizational challenge as a technical one.

Different businesses have different versions of the same data. A universal data truth would be nice, but as companies’ legal infrastructures become more complex, new financial products continue to be developed and data architectures are squeezed into older technologies, discrepancies and inconsistencies emerge that make it harder to aggregate, assimilate and normalize enterprise information.

In our very complex, interconnected and fragile world, these complexities and the data they represent do matter. But how can anyone at a senior level assess these interconnections and interdependencies when an issuer underwrites securities composed of hundreds of loans originated by various subsidiaries that are guaranteed by third parties that obtain financing and insurance from the original issuer?

It’s safe to say that many Lehman Brothers counterparties had no idea of the extent of their exposure -- until it was too late.  Unfortunately, little has changed in the race for clean data since 2008 and Lehman’s fateful demise.

Firms, especially larger ones, have heavily invested in their data infrastructure. It is not unusual to hear of investments of $50 million to $100 million to solve these problems. However, most of the firms that have made these investments do not see the returns they are looking for and most of these initiatives result in an enterprise-wide bureaucracy, hindering growth, stifling innovation and increasing costs while doing little to solve the problem.

This is because most of these large-scale data initiatives start with a faulty premise: Given unlimited centralized resources, any problem can actually be solved. But the challenge is that this data issue cannot be solved by an enterprise or even a divisional solution -- because the problem is simply too big.

Today, centralized solutions require a comprehensive view of an enterprise’s data usage. They ask us to believe the enterprise can understand, agree on and manage not only how the data should be represented, but also how it should be stored and normalized. Even if one internal team could fully understand the differences between an equity, convertible bond, debenture, securitized note, future, option and swap, it would also have to know how each product is calculated and used, and the tangential risks associated with each product, not to mention the issuer chain and the various product dependencies.

This is a Herculean task, which is why enterprise data projects extend over years, cost tens if not hundreds of millions to implement, and often never reach fruition. Furthermore, most of the cost involved isn’t software-specific – it is spent on governance, data model rationalization and integration.

There is a better way to manage these projects. It is not through unification but through a federated strategy.

A federated solution pushes data management down to the processing-system layer: each system manages the data that drives its business and then reports up to a central repository.

In a federated model, the endpoints are the masters, with the hub serving as a guide pointing to where the data can be found. In a centralized system, the central view of the pristine data gets pushed or propagated throughout the enterprise. In the federated model, the system of record determines the product characteristics and reports to a centralized aggregator.
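
To make that concrete, here is a minimal Python sketch of the pattern -- an illustration only, with invented system names and a simple in-memory registry rather than any real platform. The hub keeps only a directory of which system of record owns each entity and how to reach it; the endpoints stay the masters of their own data and push summary records up to a central aggregator.

    from dataclasses import dataclass, field
    from typing import Callable, Dict

    # Hypothetical sketch of a federated hub: the hub is a directory, not a master.
    # Each processing system remains the system of record for its own entities; it
    # registers a lookup callback and pushes summary records up to the aggregator.

    @dataclass
    class FederatedHub:
        # entity id -> name of the system of record that owns it
        directory: Dict[str, str] = field(default_factory=dict)
        # system name -> callback that fetches the authoritative record on demand
        resolvers: Dict[str, Callable[[str], dict]] = field(default_factory=dict)
        # rolled-up summaries reported by the endpoints (the central aggregator)
        aggregate: Dict[str, dict] = field(default_factory=dict)

        def register(self, system: str, resolver: Callable[[str], dict]) -> None:
            self.resolvers[system] = resolver

        def claim(self, entity_id: str, system: str) -> None:
            # The system of record declares ownership of an entity.
            self.directory[entity_id] = system

        def report(self, entity_id: str, summary: dict) -> None:
            # Endpoints push summaries up; the hub never rewrites them.
            self.aggregate[entity_id] = summary

        def fetch(self, entity_id: str) -> dict:
            # Consumers ask the hub where the data lives, then read it from the owner.
            owner = self.directory[entity_id]
            return self.resolvers[owner](entity_id)

    # Usage: a hypothetical fixed-income system owns bond XS0001 and reports up.
    hub = FederatedHub()
    bond_store = {"XS0001": {"type": "bond", "notional": 10_000_000, "issuer": "ACME"}}
    hub.register("fixed_income_sys", lambda eid: bond_store[eid])
    hub.claim("XS0001", "fixed_income_sys")
    hub.report("XS0001", {"issuer": "ACME", "market_value": 9_850_000})

    print(hub.fetch("XS0001"))      # authoritative record, served by the endpoint
    print(hub.aggregate["XS0001"])  # summary held centrally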

The centralized view assumes there is only one proper view of the data, held centrally, while federation assumes the processing systems are right. Centralizing can take responsibility for data integrity away from the business -- the group with the most to lose if the data is wrong -- and hand control to data groups that care about cleanliness but have nothing personally at stake in getting it right.

Centralization also requires very tight integration between processing and data-management platforms, creating very expensive enterprise projects that modify virtually every organizational system so that enterprise data can be seamlessly obtained, vetted and compared against each and every processing system.

Even with federation, we can’t completely sever the tie between local and enterprise information. There are times when conflicts arise and the multiple versions of the truth need to be reconciled. So federated information does need to flow between local systems and a centralized repository, and there needs to be logic to reconcile, highlight, track and balance conflicts in near real time to ensure that operating groups have not gone rogue.
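
As a rough illustration of that reconciliation logic -- again a sketch, with made-up system names, field names and an arbitrarily chosen tolerance -- the central repository can compare the versions of an entity reported by each local system and flag any attribute on which they disagree, without overwriting anyone’s data.

    from typing import Dict, List

    # Hypothetical sketch: compare the versions of an entity reported by several
    # local systems and flag any attribute on which they disagree, so audit and
    # control can see conflicts without the hub overwriting anyone's data.

    def reconcile(entity_id: str,
                  versions: Dict[str, dict],
                  numeric_tolerance: float = 0.01) -> List[dict]:
        conflicts = []
        all_fields = {f for record in versions.values() for f in record}
        for fld in sorted(all_fields):
            reported = {sys: rec[fld] for sys, rec in versions.items() if fld in rec}
            values = list(reported.values())
            if all(isinstance(v, (int, float)) for v in values):
                in_conflict = max(values) - min(values) > numeric_tolerance
            else:
                in_conflict = len(set(values)) > 1
            if in_conflict:
                conflicts.append({"entity": entity_id, "field": fld, "reported": reported})
        return conflicts

    # Usage: two (made-up) systems disagree on the market value of the same position.
    versions = {
        "risk_sys":   {"issuer": "ACME", "market_value": 9_850_000.0},
        "ledger_sys": {"issuer": "ACME", "market_value": 9_851_200.0},
    }
    for c in reconcile("XS0001", versions):
        print("CONFLICT on", c["field"], c["reported"])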

Giving users control of their data can cause conflict and increase the management complexity of federated data strategies. But given the cost and poor track record of centralized data projects, it makes sense to start thinking of a better way: one where businesses manage their own infrastructure, managers get a better view of risk, audit and control can more easily track conflicts, and senior management can focus on the company -- instead of investing hundreds of millions of dollars for an uncertain and (most likely) negative return.

For more stories in the High-Speed Data and OTC Markets Spotlight Series, click here.


3 Comments to "Big Data, Small Governance: Why Federated Data Strategies Will Win":
  • kennymcb

    22 October 2012

    Larry - We go from one extreme to the other. The main issue is firms always chasing new business in areas in which they are not dominant or which are outside their "core" business. With the market building from an "opportunistic" foundation, there is never the desire to invest heavily in a robust infrastructure - and by the time the revenue reaches a level where it demands the attention of a central body, it usually has its own nuances and a "back of the envelope" architecture designed around the traders' needs rather than the institution's. The data issue will only continue, especially as we continue to expand the cross-pollination of trading assets and new products. This problem will never go away; it will just continue to evolve, as any ROI cannot be fully justified.

  • nj.citisoft

    22 October 2012

    Larry, can I keep a foot in each camp here?  I fully take your point that processing systems all have their own peculiarities; but increasingly there is a need for standardization of reporting terms.  For example, there seems to be little advantage (and much danger) in federating LEI maintenance, or standard security identifier maintenance, within an organization when these need to be reported consistently externally.  Also, whatever the product, there are likely to be a number of further widely reported attributes and values that require an organization-wide agreed meaning, like ‘current valuation’ or ‘exposure value’ or ‘settlement amount’ (even if these are further qualified as being for a specific purpose and at a particular point in time).

    I am increasingly convinced that a composite and flexible (balanced) solution is required, whereby at least the core business reporting items are defined and their locations are recorded in a metadata layer that combines attributes of an Enterprise Service Bus and a data dictionary, and access to all of this data is via the metadata layer.  This approach should facilitate access to any ‘defined and described’ data, wherever it is physically located – and the data services approach to data access is helping move us along this route.

    The metadata approach should then make it possible to accommodate the scenario that kennymcb raises: core data items can (and must?) be identified and recorded in the metadata for reporting purposes – even if the access methods are initially less than elegant.  Then, if volumes and business criticality change, the method of handling the new product can be enhanced and the method(s) of accessing its attributes can change technically; but from a data user’s perspective – accessing information via the metadata layer – there is no change, except possibly access to more attributes.
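
One hedged sketch of the metadata-layer approach described in the comment above, with invented attribute names and sources rather than any particular vendor’s API: an agreed reporting term such as ‘exposure value’ is registered once with its definition and the access method that currently serves it; data users always go through the layer, so the physical source can later be re-pointed without changing a single caller.

    from typing import Callable, Dict, Tuple

    # Hypothetical sketch of the metadata-layer idea above: each core reporting
    # attribute is registered once with a description and the access method that
    # currently serves it; consumers always go through the layer, so the physical
    # source can be re-pointed later without changing any caller.

    class MetadataLayer:
        def __init__(self) -> None:
            # attribute name -> (business definition, fetch function for an entity id)
            self._catalog: Dict[str, Tuple[str, Callable[[str], object]]] = {}

        def define(self, attribute: str, description: str,
                   fetch: Callable[[str], object]) -> None:
            self._catalog[attribute] = (description, fetch)

        def get(self, attribute: str, entity_id: str) -> object:
            _, fetch = self._catalog[attribute]
            return fetch(entity_id)

        def describe(self, attribute: str) -> str:
            return self._catalog[attribute][0]

    # Usage: "exposure_value" initially comes from a flat extract; it can later be
    # re-pointed at a proper service and no data user notices the change.
    flat_extract = {"CPTY-42": 12_500_000.0}

    layer = MetadataLayer()
    layer.define("exposure_value",
                 "Current exposure to a counterparty, in reporting currency",
                 lambda eid: flat_extract[eid])

    print(layer.describe("exposure_value"))
    print(layer.get("exposure_value", "CPTY-42"))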

  • ttfountain

    24 October 2012

    As a Strategy and Technology consultant in several industries, I have yet to see a “one approach fits all” solution. As other posts have suggested, there are real concerns about losing “centralized” control over terms and figures that must be reported externally on an Enterprise-consistent basis. Balance that with the task of meeting tough deadlines while rationalizing definitions across tens of systems. Faced with these trade-offs, I suggest a segmented approach where, depending on the nature of the regulatory requirements, the distribution of data and systems across an Enterprise, and the governance mechanisms in place, architects can draw upon a mix of solution approaches to best fit their needs.

    What has been missing is a robust alternative on the “federated” side. Here, a solution designer can work at the metadata level, target the extraction and processing of only the data meeting specific requirements, and do so in a low-TCO, highly agile, rapid-time-to-value way.

    I have been working with a firm named Pneuron that offers a solution platform built from the ground up to serve this federated strategy. In Pneuron’s world, small, fixed-function applications (Pneurons) are distributed out to, or near, source systems. Each Pneuron performs a prescriptive action (e.g., query, match, analyze) within a larger processing “network” and passes its result set to all subscribing Pneurons. This approach eliminates centralized data warehouses (and the massive centralize, normalize and manage issues), offers a single design, deploy and operate platform that incorporates multiple technologies for performance and resiliency and, best of all, is accessible to Business and IT users via a configuration-driven design studio (no low-level coding!). They are not replacing the database, but they are focusing on a large class of business problems – Risk, Regulations, CRM, Operational productivity – that have traditionally been hindered by presupposing that these issues can only be solved by a data centralization and integration project.

    By providing choice (but containing cost and complexity), architects can best align their solution designs with the business requirements and optimize time, cost and complexity within a larger solution space. I have included links to Pneuron’s overall website and their ongoing blog, which has commentary about Pneuron’s technology, TCO and business alignment. It’s very cool indeed.

    www.pneuron.com

    http://www.pneuron.com/The-Blog-for-Intelligently-Connecting-the-Cloud-the-Business-and-Big-Data/
