Alan McIntyre, Senior Regulatory Reporting Specialist at Kaizen Reporting, offers five recommendations to help turn the extra time into a successful testing programme
The CFTC has delayed implementation of its rewritten swap data reporting rules until 5th December 2022. The widely anticipated delay and the extra six months it grants are likely to be welcomed by most reporting firms. Below we cover the background and what we recommend firms do with this windfall of extra time to improve their testing and the quality of the subsequent go-live.
Speculation about a postponement had been growing for months, with some arguing that the CFTC only finalising the Part 43 & 45 Technical Specification in October meant May 2022 was no longer realistic, and others debating just how ‘final’ the October Tech Spec really is.
Viewed in the context of the impending verification and notification requirements to ‘find it, fix it or tell us why the heck not’ (I might be paraphrasing slightly there), this delay is an opportunity to identify and remediate reporting errors before they become notifiable to the CFTC.
Let’s remind ourselves of the verification and notification requirements coming as part of the CFTC ReWrite:
- to perform “verification that swap data is complete and accurate”
- to “keep a log of each verification that it performs” and the remediation of each error detected
- to “correct any error as soon as technologically practicable after discovery of the error. In all cases, errors shall be corrected within seven business days after discovery”
- to provide “Notification of failure to timely correct” where errors cannot be corrected “within seven business days after discovery”, to “the Director of the Division of Market Oversight”
The Commission’s intentions here are clear. They want to ensure high data quality through firms monitoring accuracy, fixing issues quickly and, more to the point, getting the reporting right the first time. The CFTC don’t want loads of notifications to deal with; they want the threat of notifications to motivate firms to improve their reporting solutions and controls frameworks. That said, the NFA may well have more enthusiasm for the notification reports when they come to do their examinations and ask to see proof of how reporting errors were resolved.
It’s been six long years since the CFTC started the ReWrite process with the 2015 Request for Comment. The CFTC expects the highest levels of data quality and is clearly in no mood to tolerate excuses or sloppy reporting, especially after being forced into delaying the deadline.
Armed with an extra six months, here are Kaizen Reporting’s top five recommendations for a successful testing programme to mitigate reporting issues and reduce your exposure to those dreaded notifications.
- It’s all about the data
The delayed production date means that the UAT dates at the Swap Data Repository (SDR) are also pushed back accordingly. But this does not mean firms should delay their testing. On the contrary, this is primarily a data-driven project, so switch focus to testing the actual data.
Here we need to differentiate between testing your reporting solution and testing your data. Forget the reporting solution and instead query, assess, interrogate, and test the data itself. Does the source data itself stand up to close scrutiny?
Test the relationships between the various data items. Test the reference data. Does each internal identifier map to the correct LEI? Does each entity listed have a relationship with each CCP, broker etc.? Do the financials make sense, and are they captured consistently across the different systems?
Poor data is one of the most common causes of reporting errors, so test the underlying data independently of the efforts to test the reporting solution. Then, when your reporting solution is complete and the SDRs open their UAT environments, you can test the submission process with confidence in the actual data itself and use these results to inform your submission testing.
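As a flavour of what that reference-data testing can look like, here is a minimal sketch in Python. The mapping structure and internal identifiers are hypothetical, invented for illustration; the LEI validation itself follows the real ISO 17442 format (18 alphanumeric characters plus 2 check digits) and the ISO 7064 MOD 97-10 check-digit scheme that LEIs carry.

```python
import re

# An LEI is 18 uppercase alphanumerics followed by 2 numeric check digits.
LEI_RE = re.compile(r"^[A-Z0-9]{18}[0-9]{2}$")

def _to_number(s: str) -> int:
    # ISO 7064 MOD 97-10 convention: digits stay as-is, A->10 ... Z->35.
    return int("".join(str(int(c, 36)) for c in s))

def lei_is_valid(lei: str) -> bool:
    """Format check plus MOD 97-10 check-digit validation of an LEI."""
    return bool(LEI_RE.match(lei)) and _to_number(lei) % 97 == 1

def lei_check_digits(prefix18: str) -> str:
    """Compute the two check digits for an 18-character LEI prefix."""
    return f"{98 - _to_number(prefix18 + '00') % 97:02d}"

def find_bad_lei_mappings(mapping: dict) -> list:
    """Return internal counterparty IDs whose mapped LEI fails validation.

    `mapping` is a hypothetical internal-ID -> LEI reference-data extract.
    """
    return sorted(cid for cid, lei in mapping.items() if not lei_is_valid(lei))
```

Even a basic structural check like this, run across the whole reference-data set rather than a handful of samples, tends to surface stale or mistyped identifiers long before submission testing begins.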
- It’s not complete until it’s complete
The verification/notification requirements concern the data being “complete and accurate”.
Completeness here has two vectors. The most obvious concerns whether the correct population has been identified and successfully reported to the SDR. Problems determining the eligibility of a transaction for reporting, or issues transmitting the data, can result in both under- and over-reporting. And if the wrong population is present at the SDR then the data is not complete.
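For that first vector, a simple set reconciliation between the trades your eligibility logic says are reportable and the trades actually present at the SDR will surface both failure modes at once. A minimal sketch (the UTI values and function name are illustrative):

```python
def reconcile_population(expected_utis, reported_utis):
    """Compare the trades we believe are reportable with what reached the SDR.

    Returns both failure modes: under-reporting (expected but absent from
    the SDR) and over-reporting (present at the SDR but not expected).
    """
    expected, reported = set(expected_utis), set(reported_utis)
    return {
        "under_reported": sorted(expected - reported),
        "over_reported": sorted(reported - expected),
    }
```

Run regularly against SDR extracts, a reconciliation like this turns “is our population complete?” from an assumption into a measured, repeatable test.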
But completeness also concerns whether all the relevant fields are populated and give the regulators the full story they require. The CFTC Technical Specification outlines where certain fields are mandatory, conditionally mandatory, optional or not required. However, this is the bare minimum, and it’s the optional category that contains the sting in the tail because the definition includes, “shall be reported, if applicable”. In other words, those optional fields are not optional at all. They are mandatory where applicable, and the responsibility sits with each firm to determine that applicability.
Given the complexity of the data set at a field-by-field level and the many challenges in determining the exact trade population to be reported, we highly recommend that firms give serious time to testing completeness.
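One way to make that “mandatory where applicable” logic testable is to encode each field’s applicability as a predicate and check every record against the full rule set. The field names and rules below are invented for illustration and are not taken from the Technical Specification:

```python
# Hypothetical applicability rules: each field maps to a predicate deciding
# whether the field applies to a given trade record.
COMPLETENESS_RULES = {
    "notional_amount":     lambda rec: True,  # always required in this sketch
    "option_strike_price": lambda rec: rec.get("asset_class") == "option",
    "clearing_venue":      lambda rec: rec.get("cleared") is True,
}

def missing_fields(record: dict) -> list:
    """Return the applicable fields that are absent or empty on a record."""
    return [field for field, applies in COMPLETENESS_RULES.items()
            if applies(record) and record.get(field) in (None, "")]
```

The benefit of expressing the rules as data rather than burying them in code is that the interpretation of each conditional field becomes visible, reviewable and, crucially, challengeable by someone other than the person who wrote it.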
- Don’t just mark your own homework
If the people that built the reporting solution are the same people responsible for the testing and controls, then you are exposed to the risk that any wrong interpretations will persist across all three. Firms need to think about a structure and environment where the original assumptions, interpretations and implementations can be tested and challenged.
One of the founding principles behind Kaizen Reporting was the clear and obvious need for truly independent testing. Marking your own homework is not recommended for two main reasons. First, a mistaken understanding is very unlikely to be uncovered, because it is the prevailing understanding. Second, you don’t know what you don’t know: if you are not aware of an issue, then you are very unlikely to think about checking for it.
- Keeping the test data real
Manufactured test data, simulated scenarios and negative testing are all essential tools within the testing arsenal. I’ve lost count of the number of times I’ve seeded test cases with UTI = ‘ABC1234’, ‘DEF1234’ and so forth. But there comes a time within the project when you must test with actual data: either production data lifted from your production systems, or ‘production-like’ data that, whilst anonymised, is still a close enough representation of your actual trading data.
You need to push substantial amounts of real (prod / prod-like) data through your reporting solution and into the SDR because this data will help you uncover scenarios and issues that would have otherwise been missed with manufactured data.
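Where lifting raw production data is not permitted, deterministic pseudonymisation is one way to build that ‘production-like’ data: hashing each identifier the same way every time means the same counterparty gets the same token across all records, so the cross-record relationships that make real data valuable for testing survive the anonymisation. A minimal sketch, with hypothetical field names and an example salt:

```python
import hashlib

def pseudonymise(value, salt: str = "example-salt") -> str:
    """Deterministically replace an identifier with an opaque token.

    The same input always maps to the same token, preserving relationships
    between records without exposing the underlying identifier.
    """
    digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
    return "ANON" + digest[:12].upper()

def anonymise_trade(trade: dict,
                    sensitive=("counterparty_id", "trader_id")) -> dict:
    """Return a copy of a trade record with sensitive fields pseudonymised."""
    out = dict(trade)
    for field in sensitive:
        if field in out:
            out[field] = pseudonymise(out[field])
    return out
```

Note the salt should be kept secret and rotated between projects; with a known salt, low-entropy identifiers could be recovered by brute force, which defeats the purpose of the exercise.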
- Rinse and repeat until it’s sparkling
Regardless of how good, thorough, intrusive, comprehensive, intelligent or well thought-out your testing teams and systems are, it’s important to keep on testing. Even in the unlikely situation where, by some miracle of synergy, your data, reporting systems and controls all achieve perfect reporting, things change and those changes cause issues. Upstream systems have code releases, data gets changed, data gets transformed, and the reporting solution itself has a release that fixes one thing and breaks something unexpected. I cannot emphasise this enough: keep on testing.
It can be painful, as the more you test the more issues you’ll likely find. It’s like peeling an onion where each layer reveals something new. But hey, it’s better to cry those onion tears during the windfall of extra testing time than when completing the “failure to timely correct” notifications to the CFTC.