
MDM: Starting Big or Small?

Master Data Management, Starting Out

If we were creating a new system from scratch, or building our business around a monolithic ERP system, this question would not really apply. To focus on the relevant aspects, let's concentrate on environments with existing legacy systems and siloed data frameworks. In this environment, we are likely to have abstract conceptual data types (a "customer," say, or a "product") that are used in different ways across different lines of business or areas of focus, along with specialized data objects used solely to support specific vertical activities.
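To make the siloed situation concrete, here is a minimal, hypothetical sketch: the same conceptual "customer" modeled differently by two legacy systems, each shaped by its own line of business. All field names and values are invented for illustration.

```python
# Billing system: the customer is an account to invoice.
billing_customer = {
    "acct_no": "A-10293",
    "bill_to_name": "Jones, Patricia",
    "credit_terms": "NET30",
}

# Support system: the customer is a contact to reach.
support_contact = {
    "contact_id": 88412,
    "full_name": "Pat Jones",
    "email": "pjones@example.com",
    "open_tickets": 2,
}

# Both records describe the same person, but the keys, name formats, and
# attributes differ -- the core mismatch an MDM program must reconcile.
```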

In many cases, then, while there is a desire to "go big" with MDM and consolidate all instances of the same concept into a single unified view, the effort involved may exceed the appetite of those doling out the budget. This scenario recurs across the case study landscape, with the premise that one should start out small on the MDM program, either by concentrating on one small data set used in the organization (perhaps reference data), or (the more likely situation) by consolidating one master data object type from a subset of the business applications. Contrast this with "going big": selecting a single master data object type, seeking out all applications that create, access, or modify instances of that object type, and formulating the master environment in your architecture of choice.

MDM might be motivated by tactical drivers or by strategic ones. If MDM is driven purely by operational efficiencies, then there are likely to be quantitative measures for evaluating whether operating expenses are reduced as a byproduct of the unified view, and this may suggest starting small and making incremental changes. If MDM is strategic, then progress may be measured by quantifying organizational change and maturity, and starting small may detract from reaching the long-term goals.

Starting small does have some benefits: there is a limited set of stakeholders from whom buy-in is required, there is less need for enterprise-wide standards and collaboration, and consolidating two data sets is fundamentally simpler than consolidating many of them. On the other hand, realize that incremental adjustments require incremental actions, and there are other expected impacts as well. For example, the more we "consolidate," the more likely we are to lose critical differentiating information associated with each source application. In addition, continual unification of the business rules associated with the creation of data introduces additional specifications, along with the environmental challenge of adjusting existing legacy applications to conform to downstream expectations that were never aligned within the specific line of business creating that data. This breeds the perception of additional work in the absence of added value, organizational conflict, stonewalling, and other behavioral idiosyncrasies.
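A hypothetical sketch of that "lost differentiation" risk: a naive consolidation into an agreed unified schema silently drops the attributes that only one line of business cares about. All names here are invented.

```python
UNIFIED_FIELDS = {"name", "email"}  # the agreed master schema

billing_rec = {"name": "Pat Jones", "email": "pjones@example.com",
               "credit_terms": "NET30"}           # billing-specific detail
support_rec = {"name": "Pat Jones", "email": "pjones@example.com",
               "preferred_channel": "phone"}      # support-specific detail

def consolidate(*records):
    """Naive merge: keep only the fields in the unified schema."""
    merged = {}
    for rec in records:
        merged.update({k: v for k, v in rec.items() if k in UNIFIED_FIELDS})
    return merged

master = consolidate(billing_rec, support_rec)
# {'name': 'Pat Jones', 'email': 'pjones@example.com'}
# "credit_terms" and "preferred_channel" are gone -- each source application
# has lost information it depends on, which is why consolidation decisions
# need the affected lines of business at the table.
```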

Yet demonstrating incremental value does provide collateral for additional business buy-in, and that is always a good thing. So the answer to the question is a combination: plan big and strategically, identify key measurable tactical, operational, and strategic benefits, plan tactical phases that contribute to the strategic end-game, and execute against short-term expectations. For example, instead of grabbing two data sets and throwing them at the data integration tools to create a third, merged data set, consider planning a master data model that can accommodate the union of all instances of that master object type across the organization, and then migrating a small number of data sets into that master model.
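One way to picture that "plan big, start small" approach: design the master model to hold the union of attributes used anywhere in the organization, but migrate only a couple of sources in the first phase. This is a sketch under assumed names; the fields, sources, and mappings are all invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MasterCustomer:
    # Core attributes shared by every source.
    master_id: str
    name: str
    email: Optional[str] = None
    # Union of source-specific attributes, kept nullable so that sources
    # migrated later can populate them without a schema change.
    credit_terms: Optional[str] = None        # from billing
    preferred_channel: Optional[str] = None   # from support
    sales_region: Optional[str] = None        # from CRM (not yet migrated)
    provenance: dict = field(default_factory=dict)  # which source supplied what

def from_billing(rec):
    return MasterCustomer(master_id=f"M-{rec['acct_no']}",
                          name=rec["bill_to_name"],
                          credit_terms=rec["credit_terms"],
                          provenance={"source": "billing"})

def from_support(rec):
    return MasterCustomer(master_id=f"M-{rec['contact_id']}",
                          name=rec["full_name"],
                          email=rec["email"],
                          preferred_channel=rec.get("preferred_channel"),
                          provenance={"source": "support"})

# Phase 1 migrates only billing and support; the CRM mapping is deferred,
# but the master model already has room for its attributes, so later phases
# extend the population of the model rather than reworking it.
```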

admin @ October 30, 2008
