
Accelerate regulatory compliance with lean information management




Following the global financial crisis, the business cases and mandates many data practitioners had previously only dreamt of arrived in the form of a regulatory tsunami. In response to demands for transparency and for reduced complexity and systemic risk, a plethora of regulations emerged from most G20 regulatory authorities.

Financial Services organisations jumped on the bandwagon created by Silicon Valley and the dot-coms and began appointing Chief Data Officers (CDOs) of their own. Disciplines such as Data Quality Management, Data Modelling, Data Governance and Data Architecture suddenly received the remit and funding they had long craved. Many of the regulatory stipulations, however, focused on data storage, data provision and transparency – trade repositories, reporting standards and other related requirements. The focus was on the data itself rather than the underlying processes, architectures and people responsible for managing it.


The focus shifted from the ‘what’ to the ‘how’ in January 2013, however, when the Basel Committee on Banking Supervision published its Principles for Effective Risk Data Aggregation and Risk Reporting. Known variously as PERDARR, RDA and, in Europe, most commonly as BCBS 239, the regulation rests on the regulators’ central assertion that “Improving banks’ ability to aggregate risk data will improve their resolvability.”


In response, the Basel Committee developed eleven principles covering a range of technical, cultural and organisational requirements for banks to implement by January 2016. Many data practitioners rejoiced when the principles were first published, as they reinforced sentiments long held within the data community: defining data ownership and stewardship, treating data as an asset, and understanding the lineage or provenance of key data elements. The business case of a lifetime had landed.


Initially banks responded by forming project teams, new organisational units such as CDO offices and hiring specialist staff. Many extended and refocused existing Data Architecture and Data Management departments already implementing Data Governance programmes and other strategic data initiatives.


It is now safe to say, however, that the dream has become a nightmare for many. In a survey of banks by Ernst & Young, all respondents suggested that at least 25% of their BCBS 239-related change programmes would not be delivered by the deadline in January next year.


A commonly cited reason many banks are struggling to meet the deadlines is that the scope of BCBS 239 is substantial and the implementation timeframes are ambitious. Many of the requirements are not new within the data community, and organisations with highly mature data strategies and data architectures do exist. Few, however, would claim to have implemented these architectures, working practices and cultural changes in less than three years. Usually the investment, and the continuity of staff and approach, is sustained over many more years, and in less complex environments.


Applying an Agile and Lean approach


One area generating a great deal of buzz and excitement is Agile and Lean Information Management. These approaches use frequent iterations, a build-measure-learn cycle, multi-skilled practitioners, cross-functional teams and extensive prototyping to minimise the waste inherent in traditional Data Management approaches. Traditional sequential and waterfall approaches tend to feature extensive use of specialists, numerous touchpoints and hand-offs, and well-segregated responsibilities. This works well when requirements are well defined and relatively static and the interdependencies between disciplines are clear and minimal. For large data change projects in uncharted territory such as BCBS 239, however, a more pragmatic approach often yields greater value – particularly given the highly interconnected nature of the requirements.


Lean Information Management works by unifying many of the above touchpoints and roles using Virtual Teams, Communities of Practice or Competency Centres. Chiefly this is because there are significant efficiencies in using common tools and working through challenges such as reverse engineering, profiling, defect measurement and report prototyping simultaneously. BCBS 239 presents the perfect use case for this, given that many of these tasks are required to thoroughly understand the underlying quality and governance issues, transformations, aggregations, lineage and architecture of any single risk report. Using a cutting-edge toolset and approach enables these tasks to be carried out far more rapidly and effectively.


Technology can act as an accelerator to this lean approach and yield significant results. Take the example of demonstrating a deep, empirical understanding of a complex risk report. Traditionally this would involve many touchpoints across teams and many different tools, with a final polished product only available at the end of the project following extensive testing. A data management platform instead supports a Build-Measure-Learn cycle that allows cross-functional data professionals to tackle these tasks more efficiently. Once a firm understanding or hypothesis has been formed using prototyping functionality, the same analysts can then simulate what a final risk report would look like, allowing Subject Matter Experts to provide feedback far earlier in the process than a traditional waterfall approach would enable.
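To illustrate the ‘measure’ step of such a cycle, defect measurement can start as simply as counting missing values in required fields. The sketch below is purely illustrative – the records and field names are invented, and a real platform would profile far richer rule sets:

```python
from collections import Counter

def profile(records, required_fields):
    """Count missing values per field across a set of records.

    A crude 'measure' step: each empty or absent required field
    counts as one defect against that field.
    """
    defects = Counter()
    for record in records:
        for field in required_fields:
            if record.get(field) in (None, ""):
                defects[field] += 1
    return defects

# Illustrative trade records; field names are made up for this sketch.
trades = [
    {"trade_id": "T1", "counterparty": "ACME",   "notional": 1_000_000},
    {"trade_id": "T2", "counterparty": "",       "notional": 500_000},
    {"trade_id": "T3", "counterparty": "GLOBEX", "notional": None},
]

defect_counts = profile(trades, ["trade_id", "counterparty", "notional"])
print(defect_counts)  # one defect each in counterparty and notional
```

Recalculated on every iteration, the same counts give the team an empirical view of whether data quality is actually improving.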


This living, breathing prototype can then be used as a clearly understandable blueprint for delivering sustainable solutions and adapting to regulatory requirements as they clarify and evolve.


It’s never too late to launch your rescue package and trial a new approach, particularly when the regulatory drivers represent a paradigm shift in how your organisation manages its data assets.


If you are interested in discussing your regulatory data challenges further, we shall be hosting a roundtable with Experian Data Quality on 8th September at Eight Members Club, Moorgate.


Taxonomies – the most under-rated yet critical component of a Data Strategy?


Originally posted on Linkedin Pulse.


For many practitioners implementing data strategies there is a long list of priorities to work through before reaching the backlog item named “Optimise taxonomies”. For many it’s not even on the list. Burning platforms tend to be things sponsors and stakeholders can more easily relate to – poor-quality data, excessive time spent finding relevant data, an inability to gain insights from data, and so on. Data Modelling and Semantics requirements in general often receive little attention, so it’s unsurprising that highly specific areas such as Taxonomy Management are often neglected.


Taxonomies tend to be associated more with academia and science than with profit-seeking organisations, and are often an easy target for those wishing to keep the ‘navel gazers’ quiet. This is somewhat unfair, however, as most knowledge workers in fact encounter taxonomies surprisingly often in their day-to-day work – even if they are not always called taxonomies.


Whether it’s the Data Management team assigning industrial sectors to the company database or the MIS team generating performance reports using customer groupings and product ranges, taxonomies feature more than you might think. They are a natural way in which the human brain organises complex information. Indeed, more often than not, ineffectively managed taxonomies are a key source of pain for senior managers and C-suite executives too. How often have you heard your CXOs grumble that comparing sales, costs, margins and risk data across divisions is next to impossible? A large part of this is down to data quality and definition issues arising from poor taxonomy management.
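To make the cross-divisional comparison problem concrete, here is a minimal sketch in Python – the divisions, product labels and figures are entirely invented. Two divisions label the same products differently; mapping both sets of labels onto a shared taxonomy makes their figures directly comparable:

```python
from collections import defaultdict

# Each division reports sales against its own local product labels.
division_sales = {
    "Retail":    {"Mortgage - Fixed": 120, "Mortgage Var.": 80, "Curr. Acct": 200},
    "Corporate": {"Fixed-Rate Mtg": 40, "Variable Mtg": 60, "Current Account": 150},
}

# A shared taxonomy maps every local label onto one canonical node.
taxonomy = {
    "Mortgage - Fixed": "Mortgages/Fixed",
    "Fixed-Rate Mtg":   "Mortgages/Fixed",
    "Mortgage Var.":    "Mortgages/Variable",
    "Variable Mtg":     "Mortgages/Variable",
    "Curr. Acct":       "Accounts/Current",
    "Current Account":  "Accounts/Current",
}

# Roll sales up to the canonical nodes so divisions become comparable.
rollup = defaultdict(int)
for sales in division_sales.values():
    for label, amount in sales.items():
        rollup[taxonomy[label]] += amount

print(dict(rollup))
# {'Mortgages/Fixed': 160, 'Mortgages/Variable': 140, 'Accounts/Current': 350}
```

The work, of course, lies in agreeing and maintaining the mapping – which is precisely what taxonomy management tooling is for.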


Fortunately, optimising your organisation’s taxonomies and leveraging them in rich analysis, search and reporting is easier than many would think. It doesn’t have to be a long-term, resource-intensive upfront modelling endeavour involving woolly conversations about what a product or a customer is. Using the latest metadata discovery, profiling and taxonomy management tools, such as those of our partner Poolparty, it’s surprising how rapidly your taxonomies can be turned from an inconvenience into an asset.


Using Graph databases for data lineage


Graph databases have come of age and are now being applied to many use cases beyond traditional requirements such as social networking – infrastructure analysis, for example. Optimised for storing things (nodes, as they are often called) and the relationships between them, this technology can be applied to day-to-day problems in ways many companies are unaware of. Many areas once viewed as very challenging to model using traditional tools – such as Data Lineage, Enterprise Architecture and dependency analysis – are now much easier to model and analyse. This enables rich reporting for business stakeholders, underpinned by robust and empirical analysis.


One area in which we have had great success is analysing data lineage, particularly within complex Data Warehouse and Data Integration stacks, where the linkages between data sources and dashboard elements or report columns are often buried beneath transformations, mappings and flows. Our software partner Manta Tools (powered by Titan DB) enables users to rapidly understand the relationships hidden in their Oracle, Teradata and Informatica stacks. This saves a great deal of time and cost when maintaining your Data Warehouse and implementing changes or upgrades.
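The underlying idea can be sketched without a graph database at all. The following is a conceptual illustration in plain Python – not the actual API of Manta Tools or Titan DB, and all table, column and transformation names are invented. Each column or transformation is a node, each data flow a directed edge, and lineage analysis is simply a backwards walk over the edges:

```python
# Directed lineage edges: upstream node -> downstream node.
# Nodes are columns or transformations; names are illustrative only.
edges = [
    ("src.trades.notional",       "etl.fx_convert"),
    ("src.fx_rates.rate",         "etl.fx_convert"),
    ("etl.fx_convert",            "dw.positions.notional_usd"),
    ("dw.positions.notional_usd", "report.credit_risk.exposure"),
    ("src.counterparties.rating", "report.credit_risk.exposure"),
]

# Reverse adjacency map: node -> set of its direct upstream nodes.
upstream = {}
for src, dst in edges:
    upstream.setdefault(dst, set()).add(src)

def lineage(node):
    """Return every node feeding into `node`, directly or indirectly."""
    seen = set()
    stack = [node]
    while stack:
        for parent in upstream.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

sources = lineage("report.credit_risk.exposure")
# Traces the exposure column back through the warehouse and ETL layer
# to its ultimate sources, including src.trades.notional and src.fx_rates.rate.
```

In a real graph database the same traversal is expressed as a query, which scales to the millions of edges found in a production warehouse and integration stack.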


Please do get in touch if you would like to know more about how this latest generation of graph tools can help your organisation.
