Cross-System Data Consistency: Improving Data Quality for Business Critical Data Flows
Why should you care about data quality? Haven’t you done enough already? Isn’t data quality so, so twentieth-century? Companies have spent fortunes on expensive ERP, CRM, SCM, and a plethora of other specialized systems, without providing adequate care and feeding of the underlying data that drives every modern business. Bad data causes these systems, and the business processes they support, to perform poorly. Hence the rise of the ubiquitous master data management project, intended to solve that problem.
Bad Data = Bad Business
As an example, bad data (such as incomplete or incorrect customer records) can prevent an order from being fulfilled, causing the loss of a sale. Levi Strauss is a recent case in point: bad data caused its new ERP system to malfunction, which translated into a 98% drop in net income.
Another recent and very prominent example of bad data causing business problems is the much-maligned mortgage origination and securitization process that has wreaked havoc in the global financial system. The data trail from the actual mortgages to the mortgage-backed securities to the ultimate owners of those securities has become so blurred that understanding the ownership, value, and risk of those securities is almost impossible. In fact, there have been news stories highlighting how banks were having problems foreclosing on delinquent mortgages because they could not establish that they were the clear owners of those mortgages. This is yet another example of inaccurate and incomplete data having a real-world business impact.
When you think about data quality, what do you think about? Industry analysts and thought leaders tend to focus on two aspects of data quality: completeness and accuracy. However, this narrow focus misses a much more significant data quality issue: data consistency across systems. The consistency of data across a critical business process, such as your supply chain, order processing, anti-money laundering, or risk management process, is probably one of your most difficult data quality challenges. These complex processes do not draw data from a single application or data source; they pull data from multiple specialized systems, each with a different purpose and a different twist on the information.
The issue of data consistency has come to the forefront because organizations and their underlying IT infrastructure have evolved into highly distributed and specialized networks of software applications, computers, and databases. The resulting data quality issues manifest themselves in a number of different ways. Do any of these sound familiar?
- Emergency bug fixes in which application developers repurpose a previously unused column and don’t document the change.
- A change in schema of one system that is not reflected in other applications that share the same data.
- Application users who need features to support new business processes invent their own special codes, using previously unused or seldom-used fields in the application to indicate alternative meanings that are never fully documented.
- Fast-changing business requirements drive the evolution of individual software applications which in turn have uneven ripple effects throughout the networked IT infrastructure.
For companies to have complete and accurate control of their business processes, they must first determine which sources contain the information that most accurately represents the state of their operations. Whichever sources are ultimately identified as the trusted sources, companies must be able to measure the quality and consistency of data as it moves into and out of these trusted sources. They must also be able to audit and remediate these data flows on an ongoing basis.
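The measurement step described above can be sketched in code. The following is a minimal illustration (with hypothetical record and field names, not any specific product's API) of reconciling records between a trusted source and a downstream system, flagging entries that are missing or inconsistent so they can be audited and remediated:

```python
# Hypothetical sketch: reconcile customer records between a trusted
# source and a downstream system. Records are plain dicts keyed by a
# shared identifier; field names here are illustrative assumptions.

def reconcile(trusted, downstream, key="customer_id", fields=("name", "status")):
    """Compare records by `key`; return a simple audit report of
    keys missing downstream and fields whose values disagree."""
    trusted_by_key = {r[key]: r for r in trusted}
    downstream_by_key = {r[key]: r for r in downstream}

    report = {"missing": [], "mismatched": []}
    for k, t_rec in trusted_by_key.items():
        d_rec = downstream_by_key.get(k)
        if d_rec is None:
            # Record never propagated to the downstream system.
            report["missing"].append(k)
            continue
        # Collect fields where the two systems disagree.
        diffs = {f: (t_rec.get(f), d_rec.get(f))
                 for f in fields if t_rec.get(f) != d_rec.get(f)}
        if diffs:
            report["mismatched"].append((k, diffs))
    return report

trusted = [
    {"customer_id": 1, "name": "Acme", "status": "active"},
    {"customer_id": 2, "name": "Globex", "status": "active"},
]
downstream = [
    {"customer_id": 1, "name": "Acme", "status": "inactive"},
]
print(reconcile(trusted, downstream))
# → {'missing': [2], 'mismatched': [(1, {'status': ('active', 'inactive')})]}
```

In practice such a check would run on a schedule against the real data stores, and the report would feed a remediation workflow rather than being printed; the point is only that consistency across systems is something you can measure mechanically once the trusted sources are identified.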
Visit this article to read about the analysis and operational phases of ensuring data consistency, and to read case studies of companies who have solved this problem.
admin @ July 30, 2008