There may not be a ‘typical’ FpML implementation project but Christian Nentwich of Message Automation explains the main components and common challenges inherent in the implementation of this market standard.
More than ten years have now passed since FpML was launched as a standard. That a fair number of financial institutions have yet to implement it is testament to its complexity, to wide variations in IT capability, but also to the fundamental nature of the OTC derivatives market – one that has traditionally valued flexibility over standardisation, places a premium on competition, and has sustained a high rate of product innovation.
Against this background, operations managers and CIOs who wanted a simple solution to electronic trade messaging – “put it all in FpML and paper will be unnecessary” – have sometimes voiced disappointment with FpML’s progress. That seems unfair, and is probably the result of early over-selling. On the whole, the standard is starting to look fairly successful given the environment it operates in: it has enabled the larger houses to radically cut down point-to-point messaging, reducing time to market and development cost; it is the cornerstone of services like Deriv/SERV, MarkitWire, SwiftNet FpML and many yet to come; and it has, though this is rarely perceived, aided substantially in knowledge transfer from innovative sell-side firms to the buy side.
If we have learned anything in the last ten years, it is in fact this: there is no “typical” FpML implementation project. “We will implement FpML” is equivalent to “we will implement XML”: a statement of intent, but an ambiguous one at best. FpML remains the equivalent of a Swiss army knife, except that the price tag comes with several zeroes at the end. Here are some more concrete aims, for all of which FpML is the preferred choice:
• Standardise internal trade representations to get away from system-specific ones, to aid in reconciliation and reporting, and to align with market practice
• Fully automate communication with services, counterparties or third parties like custodians or administrators to increase STP rates
• Standardise internal message flows – coupled with proprietary extensions – to cut down the number of point to point connections
It is important to recognise that these goals, while they do have some commonality, will result in quite different levels of complexity, and that the costs – and hence the required business cases – will vary. Message Automation has been involved in FpML implementation projects for at least seven years, and has seen a fair few run over budget and others fail altogether. The main reasons seem to be:
• Unclear objectives; you have to be clear about the upper limit of what can be achieved. By and large, if the top-level goal has “FpML” in it, it is probably wrong
• Underestimating the amount of analysis necessary to do the work properly
• Lack of specialist skills
• Late-stage technology failures or development failures caused by inadequate tooling or architectures
• Using the wrong outsourcing strategy
None of these are unavoidable. We will take a look at some of the issues commonly encountered, and how they can be mitigated.
Standardising Trade Representations
Standardising trade representations means getting away from system-centric or department-centric models. We will not go into why one might want to do that, but simply assume that the business has already identified a good reason.
It makes sense to implement FpML for this purpose. FpML is, after all, modelled around standard ISDA agreements. Improvements in representation will translate into fewer confirmation errors or other post-trade breaks, which makes it easier to reconcile trade inventories (since systems will now use the same format) and provides agility to the business when it comes to connecting out.
Any project aimed at implementing FpML properly, rather than as a format to dump data into (more on that below), will have to plan for the following difficulties:
Booking Differences

• FpML is a parametric representation, and very prescriptive; systems that are heavily cash-flow based may not fit
• Often encountered on the buy side: FpML models trade contracts, not listed instruments. That sounds academic, but it can cause issues when reusing legacy workflows and reporting systems, leading to requirements for dummy instruments, dummy holdings, etc.
• Before an implementation can happen, a certain amount of consistency in trade booking is required; if traders or the middle office can book arbitrary structures, delay the FpML project and address this first by partitioning the trade population
• Resist the temptation to use FpML to carry arbitrary trades just because the names of some parts of the structure “sound similar”

Specific Data Issues

• FpML carries unadjusted dates almost exclusively. Many off-the-shelf systems store only adjusted dates, and it is a violation of the standard to “squeeze” such dates into FpML. This innocuous-looking detail is more akin to an iceberg and can trip up projects
  o Although it is painful, standardising date adjustment logic across multiple systems does help to reduce risk
• For interest rate instruments in particular, FpML has no “simple” representation. The same model is used to carry everything from a vanilla swap to a corridor swap with customised terms; there is a steep learning curve
• FpML has many mandatory fields because of its background in automating confirmations. If data is not available, such fields have to be defaulted
• FpML uses enumerations extensively. Gap analysis is necessary for each of them, and values that do not fit are difficult to carry in FpML for technical reasons rooted in XML Schema

Gaps in FpML Standardisation

FpML stops short of standardising some fairly common structures. This can come as a surprise. Some examples:
• Should zero-coupon swaps be modelled as a floating leg and a payment, or as a floating and a fixed leg? If the latter, should they be modelled with an (initial) notional and rate, or can the rate be zeroed out and a final payment added?
• How should rate legs that change from fixed to floating be modelled?
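The unadjusted-date point is worth illustrating. FpML carries the unadjusted date plus a business day convention, and expects the consumer to derive the adjusted date; a system that stores only the adjusted date cannot recover this information. The sketch below uses a simplified weekend-only calendar and simplified versions of the FOLLOWING, MODFOLLOWING and PRECEDING conventions – it is an assumption-laden illustration, not the ISDA definitions:

```python
from datetime import date, timedelta

def is_business_day(d, holidays=frozenset()):
    """Weekends and an (assumed) holiday set count as non-business days."""
    return d.weekday() < 5 and d not in holidays

def adjust(unadjusted, convention="MODFOLLOWING", holidays=frozenset()):
    """Roll an unadjusted date to a good business day.

    Simplified stand-ins for the FpML businessDayConvention values
    FOLLOWING, MODFOLLOWING and PRECEDING; NONE returns the date as-is.
    """
    if convention == "NONE":
        return unadjusted
    step = timedelta(days=-1 if convention == "PRECEDING" else 1)
    d = unadjusted
    while not is_business_day(d, holidays):
        d += step
    if convention == "MODFOLLOWING" and d.month != unadjusted.month:
        # Modified following: if rolling forward crossed a month end, roll back
        d = unadjusted
        while not is_business_day(d, holidays):
            d -= timedelta(days=1)
    return d

# Saturday 2024-06-29 rolls forward past the month end under FOLLOWING,
# but MODFOLLOWING pulls it back to Friday 2024-06-28.
print(adjust(date(2024, 6, 29), "FOLLOWING"))     # 2024-07-01
print(adjust(date(2024, 6, 29), "MODFOLLOWING"))  # 2024-06-28
```

Note that the two conventions can yield the same adjusted date from different unadjusted dates, which is exactly why the adjusted date alone is not a faithful FpML representation.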
As a consequence of these issues, project planners should avoid estimating project time by the complexity of the product type to be implemented or by the number of data attributes. The tried-and-tested 80/20 rule very much holds here: most of the implementation will be easy. Estimates should be driven by a detailed gap analysis of the few difficult issues that are likely to come up.
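For enumerated values, such a gap analysis can be as mechanical as mapping each internal code to a standard value and surfacing whatever does not fit. The internal codes and the mapping below are hypothetical; the target values follow the style of FpML's day count fraction coding scheme:

```python
# Minimal enumeration gap analysis: map internal codes to standard values
# and partition the trade population's codes into "mapped" and "gaps".
# Values styled after the FpML day-count-fraction scheme; mapping is invented.
FPML_DAY_COUNTS = {"30/360", "30E/360", "ACT/360", "ACT/365.FIXED", "ACT/ACT.ISDA"}

INTERNAL_TO_FPML = {       # assumed mapping from a legacy booking system
    "BOND": "30/360",
    "MONEY": "ACT/360",
    "FIXED365": "ACT/365.FIXED",
    "BUS/252": None,       # no obvious standard value -> needs a decision
}

def gap_report(internal_codes):
    """Return (mapped codes, codes needing analyst attention)."""
    mapped, gaps = {}, []
    for code in internal_codes:
        target = INTERNAL_TO_FPML.get(code)
        if target in FPML_DAY_COUNTS:
            mapped[code] = target
        else:
            gaps.append(code)
    return mapped, gaps

mapped, gaps = gap_report(["BOND", "MONEY", "BUS/252"])
print(mapped)  # {'BOND': '30/360', 'MONEY': 'ACT/360'}
print(gaps)    # ['BUS/252']
```

The point of driving estimates this way is that the `gaps` list, not the (usually long) `mapped` list, is where the project time goes.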
Key factors for successful standardisation include, in order of priority:
• Detailed knowledge of the systems involved
• Knowledge of the FpML standard – a lot of time can be spent figuring out where to look!
• Good understanding of booking processes/conventions followed by the business
• Communication! Either you find people who represent the last three items in one person, or three experts need to have clear communication channels, and be available throughout the project
• In the IT department: strong data architecture, some business knowledge, and detailed knowledge of XML Schema and XML processing
Conversely, common factors that put standardisation projects at risk include:
• Absence of booking discipline
• Inability or unwillingness to push required changes – e.g. requirements to capture additional data - back into systems or operational processes
• Propensity to follow a waterfall process of development. You will not get this right first time
• Outsourcing the core of the project to non-specialist companies; these projects succeed or fail with business analysis, not the number of people involved
• Old technology that cannot handle XML
Automating External Communication

External communication using FpML shares many of the tasks involved in internal standardisation projects, but any project will also face some quite different issues.
The first thing to realise is that any external service provider or counterparty will impose a significant number of additional rules regarding correct usage of the standard. FpML is a relatively open standard, and is typically heavily restricted by services, with constraints governing everything from reference data to trade structure, as well as limitations on the underlyers that may be referenced. Be prepared to follow manuals running to several hundred pages to achieve compliance.
What seems at first to be a problem can, however, be a blessing in disguise: the heavily locked down communication mechanisms that service providers use mean that there is less guess-work to do when implementing connections, and that it is much easier to come across off-the-shelf software that solves all or part of the connectivity problem. This is not the case with standardisation projects: there is no “FpML-in-a-box” that helps with generic implementations.
Connections to service providers do carry the additional overhead of having to align business processes as well as data. This requires additional up-front planning and leads to further requirements in IT capability, mainly in being able to implement and rapidly change XML-based message flows.
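In practice this means validation is layered: the message must be schema-valid FpML, and it must then pass the service's own rulebook. A sketch of the second layer, using the standard library's XML support – the two rules shown are invented examples, not any real provider's requirements:

```python
# Sketch: service-specific business rules applied on top of schema validity.
# The rules below are hypothetical; only the FpML 5 namespace URI is real.
import xml.etree.ElementTree as ET

NS = {"fpml": "http://www.fpml.org/FpML-5/confirmation"}

def rule_trade_date_present(root):
    return root.find(".//fpml:tradeHeader/fpml:tradeDate", NS) is not None

def rule_party_count(root):
    return len(root.findall(".//fpml:party", NS)) >= 2

SERVICE_RULES = [
    ("trade date must be present", rule_trade_date_present),
    ("at least two parties required", rule_party_count),
]

def validate(xml_text):
    """Return the list of service-rule violations (empty means compliant)."""
    root = ET.fromstring(xml_text)
    return [msg for msg, rule in SERVICE_RULES if not rule(root)]

doc = """<dataDocument xmlns="http://www.fpml.org/FpML-5/confirmation">
  <trade><tradeHeader><tradeDate>2024-06-28</tradeDate></tradeHeader></trade>
  <party id="party1"/><party id="party2"/>
</dataDocument>"""
print(validate(doc))  # [] -> all service rules pass
```

Real rulebooks run to hundreds of such rules, which is why the "several hundred pages" of provider documentation translate directly into implementation effort.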
Standardising Internal Message Flows
FpML can also be used as a tool to make internal messaging more efficient and reduce costs in IT, rather than addressing any operational requirements.
The case for replacing point-to-point connections with a neutral communication layer, hence reducing n² connections to n, is one of the most frequently made business cases in the industry. To succeed, the following must be true:
• There must be many systems wishing to communicate; if the number of systems is small, the cost of the implementation will be far greater than the benefit
• The systems must not be logically coupled to such an extent that the project is pointless (e.g. if there are ten systems, but they communicate as five pairs)
• The incremental cost of bringing in new systems must be shown to be decreasing
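The arithmetic behind the first bullet is easy to make concrete: n systems talking pairwise need n(n-1)/2 connections, while a neutral layer needs only n, so the saving only materialises once n grows:

```python
# Point-to-point vs hub connection counts: the business-case arithmetic.
def point_to_point(n):
    """Connections needed if every pair of n systems talks directly."""
    return n * (n - 1) // 2

def hub(n):
    """Connections needed if all n systems talk via a neutral layer."""
    return n

for n in (3, 5, 10, 20):
    print(n, point_to_point(n), hub(n))
# With 3 systems the saving is nil (3 vs 3); with 20 it is 190 vs 20 -
# which is why the case only works when many systems genuinely participate.
```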
If the messages in question are OTC derivatives trades, then using FpML as the neutral format, supplemented with wrappers, makes sense. However, in our experience, institutions have found it much harder in these implementations to justify the cost of full compliance one would expect from a project driven by standardisation requirements. Correspondingly:
• Firms should consider using portions of FpML as “building blocks” for their own standards in such projects. To clarify, this means copying portions of FpML into an internal standard.
  o Avoid the temptation to implement full FpML if it cannot be done properly. Reusing portions within your own standard yields an honest snapshot of how the business actually records its trades; shoe-horning everything into full FpML has the opposite effect and masks it
• Business cases are more likely to be made around avoiding the expense of designing your own standard than operational efficiency/risk
Finally – Technology
Most businesses will at some point have to decide whether to buy technology to help them with some or all of their FpML projects.
It should be clear from the above that since the standard can be used for so many diverse ends, there is also no single tool that will “do FpML”. The area that comes closest to that is external connectivity, and it is worth looking for off-the-shelf software for that. Otherwise, it is probably best to treat FpML like any other XML standard and to take care that in-house or external software meets its particular requirements:
• Transformation: FpML makes heavy use of structure. It does not lend itself well to flattening or “good old” tools based on the relational data model. Native XML tools will do better.
• Flow: messaging/workflow/orchestration engines must be able to address complex XML structures in decision logic.
• In general: support for tricky XML quirks like substitution groups, type overrides and mixed namespaces is vital.
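To illustrate the transformation and flow points: decision logic over FpML has to address deeply nested, namespaced structure rather than flattened columns. A small sketch using the standard library – the document fragment is hypothetical, though it follows FpML's interest rate swap style and uses the real FpML 5 namespace:

```python
# Routing by XML structure: a swap stream is classified by which element
# it carries, not by a flattened column name. Fragment is illustrative only.
import xml.etree.ElementTree as ET

doc = """<swap xmlns="http://www.fpml.org/FpML-5/confirmation">
  <swapStream>
    <calculationPeriodAmount><calculation>
      <fixedRateSchedule><initialValue>0.035</initialValue></fixedRateSchedule>
    </calculation></calculationPeriodAmount>
  </swapStream>
  <swapStream>
    <calculationPeriodAmount><calculation>
      <floatingRateCalculation>
        <floatingRateIndex>EUR-EURIBOR-Telerate</floatingRateIndex>
      </floatingRateCalculation>
    </calculation></calculationPeriodAmount>
  </swapStream>
</swap>"""

NS = {"f": "http://www.fpml.org/FpML-5/confirmation"}
root = ET.fromstring(doc)

def leg_type(stream):
    """Classify a swapStream by the structure it contains."""
    if stream.find(".//f:fixedRateSchedule", NS) is not None:
        return "fixed"
    if stream.find(".//f:floatingRateCalculation", NS) is not None:
        return "floating"
    return "unknown"

print([leg_type(s) for s in root.findall("f:swapStream", NS)])
# ['fixed', 'floating']
```

Tools built on the relational model force this nesting into join tables first; native XML addressing, as above, keeps the decision logic close to the standard's own shape.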
The complexity of FpML and the wide variety of tasks it can be used for can be daunting. They also make it very hard to come up with “cut and dried” recipes for project success.
If just three messages could be taken away from this article, they should perhaps be these: focus on delivering business benefit, not on implementing the standard; realise that this is at the complex end of things, where out-of-date processes and technology will fall over; and take a risk-based approach to any project, focussing on the difficult parts.
Christian has been involved in creating cutting-edge software and bringing it to customers for the last eight years. He is currently continuing his quest at Message Automation, where he is Director of Strategy, at Model Two Zero, and on the advisory board of Satalia, a recent university spin-out. Christian combines deep technical knowledge and a research-driven attitude with an understanding of the financial markets - in particular OTC derivatives. He founded the FpML Validation Working Group and subsequently chaired it for several years, and has helped a laundry list of sell- and buy-side firms come to grips with the standard. Christian holds a PhD and BSc Hons in Computer Science from University College London (UCL).