5 Master Data Management Myths You Can Stop Believing

There are many exciting topics under the Data Management umbrella, but if I had to choose one that resonates with me the most, I’d go with Master Data Management (MDM). Why? Because MDM programs can enable any organization to realize the value of one of its most crucial assets: data.

Master Data Management is a hot topic in the IT industry. However, many organizations are on the fence about implementing it due to the wide impact of changes and a relatively high rate of MDM project failures. Yet, despite this, developing a comprehensive MDM strategy should be a priority for modern, data-driven organizations.

In my two decades of work in the data management space, I’ve noticed that even though MDM has gained considerable traction, it isn’t always well understood. In this post, I’ll share and hopefully debunk the most common myths that surround it.

But before diving in, let’s review the fundamentals of MDM.

Master Data Management is a framework that allows organizations to generate uniquely identifiable, business-critical data. This data is often referred to as an “entity”. In essence, MDM makes corporate data an integrated, harmonious whole by continuously bringing together source data, assessing its quality and ironing out the inconsistencies to solve data-related business problems.

Now that we’ve established what MDM is, let’s explore what it isn’t.

Myth #1: Master Data Management is a software product.

Too often, we see that MDM is perceived as a software solution, when it really is a framework. Unfortunately, no software can handle the entire MDM framework right out of the box. Many vendors will pitch their product as the ultimate, holistic system, but what they don’t tell you is that MDM software is just an accelerator.

Of course, there is an undisputed value in MDM software, especially when it comes to simplifying and expediting certain elements of the master data management program such as Identity Resolution, Automation, Survivorship, and Remediation. However, approaching vendors to find what is available on the market shouldn’t be the first step when planning an MDM program.
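
To make one of those elements concrete, here is a minimal sketch of a survivorship rule, one of the merge strategies MDM software automates. The rule shown ("most recently updated non-empty value wins") and the record fields are illustrative assumptions, not a specific vendor's implementation.

```python
from datetime import date

def survive(records):
    """Merge duplicate records into one golden record.

    For each field, keep the value from the most recently updated
    record that actually has that field populated.
    """
    golden = {}
    # Walk newest records first, so stale values never overwrite fresh ones.
    for rec in sorted(records, key=lambda r: r["updated"], reverse=True):
        for field, value in rec.items():
            if field != "updated" and value and field not in golden:
                golden[field] = value
    return golden

# Two duplicate client records with complementary gaps (hypothetical data).
duplicates = [
    {"name": "Jane Roe", "email": "", "phone": "555-0100", "updated": date(2020, 1, 5)},
    {"name": "Jane Roe", "email": "jane@example.com", "phone": "", "updated": date(2021, 3, 2)},
]
print(survive(duplicates))
# {'name': 'Jane Roe', 'email': 'jane@example.com', 'phone': '555-0100'}
```

Real MDM platforms layer many such rules (source-system trust scores, field-level precedence, manual stewardship overrides) on top of this basic idea, which is exactly why they accelerate a program without replacing it.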

While tools can certainly help or hinder MDM efforts, a successful MDM implementation is not made or broken by a tool. The real key to effective MDM lies in identifying the fundamental elements of the program and carefully designing the implementation roadmap. Planning early on will increase the probability of MDM implementation success and will help avoid unnecessary software spend.

Myth #2: MDM can be done in a silo

Organizational silos and resulting data silos are generally not conducive to effective data management operations.

Let’s take the finance department of a large manufacturing enterprise. The department takes in information from several financial systems, which leads to duplicated and inconsistent data. The organization’s CFO decides to run MDM solely for the finance department, focusing specifically on the “client” entity. As a result, the newly implemented MDM solution generates unique client records within the financial systems. It all looks good until a client contacts the company to change his address. Although the change is promptly reflected in the CRM, the financial systems remain untouched. Even though the client receives his product, the invoice never arrives, since it was sent to the old address.

Successful MDM requires that we track our chosen data entity across the ENTIRE organization, without exceptions. If, as in the example above, client data lives in 80% of departments, then we need to incorporate ALL of them into the solution.

Myth #3: Master Data Management is expensive

Because MDM is typically an enterprise-scale project, it’s automatically associated with large investments and significant effort. As with any large-scale project, the strain on corporate resources is hard to predict and can go well beyond the initial estimates. In the presence of multiple risk factors (broad scope, high impact of changes, technology that’s new to the organization), MDM projects can be loaded with uncertainty. However, there are ways to mitigate the risks and pave the way for success. One of the best ways to do that is by phasing the MDM implementation based on the criticality of data entities and their prevalence across the organization. This approach helps companies validate the program with a small budget, while still offering value to the organization.

For example, rather than focusing on a major data entity that’s widespread across the organization, companies can drastically reduce the risk by starting with a “smaller” and less critical entity. For many organizations, “employee” is a good test entity to use as a proof of concept. Not only is it less prevalent across the enterprise than much more critical entities such as “customer” or “product”, but it also offers a good starting point for further program expansion.

A gradual, iterative approach to MDM alleviates the risk of failure, improves adoption and sets the program up for success when rolled out on a large scale.

Myth #4: MDM can be successful without Data Quality

In my experience, enterprises are very confident in Data Quality across their systems, until that confidence is put to the test. In most cases, Data Quality is lacking, and the organization is rarely fully aware of it. There are two main reasons why this happens:

  1. There is no such thing as perfect data. Data Quality rules are based on current business needs and subjective decisions about what quality means, so a state of perfect data is hard to achieve and maintain.
  2. Data is constantly changing, and live data tends to introduce inconsistencies. Data quality is therefore just a threshold at which quality reaches an acceptable level.

Before embarking on an MDM implementation, it’s vital to address critical data discrepancies between existing systems. This can mean, for example, standardizing the way you specify state or province names across all systems (e.g. choosing between an abbreviated name “AB” and a full name “Alberta”). Without proper Data Quality, matching records will remain separate and potentially create false-negative scenarios.
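
The standardization step above can be sketched in a few lines. This is a simplified illustration, not production matching logic: the province mapping is deliberately incomplete, and the record fields are hypothetical.

```python
# Canonical forms for province abbreviations (illustrative, not a full list).
PROVINCES = {"AB": "Alberta", "BC": "British Columbia", "ON": "Ontario"}

def normalize_province(value):
    """Map an abbreviation to its full name; otherwise title-case the input."""
    v = value.strip()
    return PROVINCES.get(v.upper(), v.title())

def same_client(a, b):
    # Without normalization, "AB" vs "Alberta" would be a false negative:
    # the same client would survive as two separate records.
    return (a["name"].lower() == b["name"].lower()
            and normalize_province(a["province"]) == normalize_province(b["province"]))

rec_crm = {"name": "Acme Ltd", "province": "AB"}
rec_fin = {"name": "ACME LTD", "province": "Alberta"}
print(same_client(rec_crm, rec_fin))  # True
```

Real identity resolution adds fuzzy matching, address parsing and confidence scoring, but every one of those techniques depends on this kind of upfront standardization to work reliably.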

Myth #5: MDM is best left to IT

Assigning MDM implementation to IT might seem like an obvious choice. After all, the project will deal with both data and software systems. And while the IT department is one of the major MDM stakeholders and is responsible for the technical implementation, MDM is also about solving specific business problems. When IT alone is assigned to establish MDM, the business tends to get less involved. This means that even though the technology is handled, business needs may not be fully addressed, which can lead to low MDM adoption.

Due to the wide scope and reach of MDM, the program should be centrally organized, overseen by a committee, and implemented by a well-represented, cross-functional team drawn from across the entire organization.

Are you ready for MDM?

Each year, the amount of data enterprises gather and produce increases exponentially. But without a structured organizational approach to data management, the data not only diminishes in value but also poses liability risks when mishandled. MDM can do a lot more for an organization than just reduce these risks – it can actively help realize the full potential of data by establishing an enterprise-wide state of data clarity.

Curious about Master Data Management, but don’t quite know where to start? Contact us today and we’ll walk you through the steps for adopting the framework in your organization.

Spark Some Joy and Declutter your Business Intelligence

It’s human nature to hoard stuff. I, for example, have a habit of buying tools specific to whatever hobby project I’m working on. My behaviour usually follows the same pattern: I’m in the midst of a challenging task when I realize that a specialized tool would make my job easier. I then do a quick run to the local Home Depot to pick up what’s needed. When the task is complete, I find the right storage space for my new purchase. But despite trying to stay on top of what I own, I’m periodically confronted with a garage full of gadgets that serve a similar purpose.

Most often, one of two things happens when we’re faced with clutter: we lose track of what we have and resort to using only the things at the top of the pile; or, like me, we keep getting new tools, only to realize that we own a lot of similar items that take up space but add little value.

The same principles apply to business intelligence and analytics. Most organizations that have BI programs in place struggle, at some point, with duplicate reports, messy BI storage or just a plain old lack of regular planning.

Luckily, as with other kinds of mess, BI too can be organized.

In fact, organizations that make a conscious effort to maintain order and declutter from time to time, reap the benefits of timely analytics that meet business needs without sacrificing performance.

Decluttering Business Intelligence

Setting up a successful Business Intelligence (BI) program requires planning. In the early stages of getting business analytics off the ground, we tend to spend a lot of time deliberating the size, scope and scale of the program. However, once the Business Intelligence program is up and running, new problems emerge, making organizations question their original approach. Just as in keeping a tidy house, the key to an efficient BI program lies in ongoing rationalization and decluttering of the various BI components.

How to tell if my Business Intelligence needs reorganizing?

If:

  • You’ve got business intelligence reports or visualizations, but they don’t fully address your current needs. You find yourself creating completely new reports based on new data sets.
  • You’ve got many data sets in a centralized location, but you aren’t entirely sure where to find the right data.
  • You’re doubling or tripling the effort and/or data.

…you might be experiencing the common symptoms of cluttered BI.

On the surface, the above issues seem easy to address. Create a couple of catalogues, decommission old BI elements, and the BI platform will be clean. Unfortunately, it is rarely that easy…

Consider this example:

Suppose you have a BI report which shows which products were sold in different regions and at what price. To get this report, source data was extracted from several siloed systems in each of your regions including production systems, financial systems, and ERP systems. After extraction, your data was cleansed, transformed and consolidated in the dimensional Enterprise Data Warehouse. Then, the required dimensions and facts were extracted to a Data Mart and your report was created.

However, the market situation has changed, and you need to analyse how different types of clients consume your product. You reach out to the Data Analysts from the business unit, but they have a hard time locating the required industry information in the consolidated BI storage. When you turn to the team responsible for maintaining the BI storage, they offer little insight into the data content. You realize that there are no data owners, since the data sets in the BI storage are all based on consolidated data. Now, to get the results you’re looking for, you will need to start a new project, extract the entire data set again and create a new report. Often, at the very end of it, you will realize you’ve created a duplicate data set.

Take steps to organize your Business Intelligence

It’s easy to imagine what your business intelligence will look like after repetitively generating BI products without addressing the underlying issues. At some point, BI rationalization (decluttering) will be required.

While there are many variations of BI implementations from an organizational and technical standpoint, at a high level the fundamental elements and data flow process remain the same:

Process of Organizing Business Intelligence

1. Start by reviewing your Data Sources

We often see that one of the leading causes of BI clutter is ingesting more data from the sources than what’s needed. To get organized, begin by identifying the current and future use cases for your business intelligence. For starters, every extracted data set should be associated with a Data Owner (also called Data Steward or Data Custodian) who can provide insights into this data set.

Once the data is extracted from its sources, it’s often transformed, normalized (restructured and modeled) and merged or joined based on the BI Data Architecture. This is the point where things can go awry. To avoid future complications, always opt to retain metadata lineage that tracks the data back to its sources.
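
In its simplest form, retaining lineage just means carrying a small metadata record alongside each derived data set. The sketch below illustrates the idea; the class shape, system names and transformation labels are assumptions for the example, not a specific metadata standard.

```python
from dataclasses import dataclass, field

@dataclass
class Lineage:
    """Minimal lineage record carried alongside a derived data set."""
    source_systems: list                      # e.g. ["ERP", "CRM"]
    transformations: list = field(default_factory=list)

    def derive(self, step):
        # Each transformation produces a new data set with one more
        # recorded step, so any column can be traced back to its origin.
        return Lineage(self.source_systems, self.transformations + [step])

raw = Lineage(["ERP"])
dw = raw.derive("cleanse").derive("conform dimensions")
mart = dw.derive("extract sales facts")
print(mart.source_systems)    # ['ERP']
print(mart.transformations)   # ['cleanse', 'conform dimensions', 'extract sales facts']
```

In practice this bookkeeping lives in a data catalogue or ETL tool rather than hand-rolled classes, but the principle is the same: if every data set in the BI store knows its sources and steps, decluttering decisions become traceable instead of guesswork.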

2. Take a good look at your BI Storage

Business requirements and data consumption performance are the two major drivers that affect decisions on how data is originally joined, merged, and structured. However, in virtually all cases, business needs will change over time. When they do, the newly created BI products (dashboards, reports) and added sources of data will affect the structure of your BI storage sometimes leading to unnecessary complexity and poor performance.

There are several ways to reduce this complexity, which all boil down to planning and organizing the storage, regardless of whether you’re using dimensional or non-dimensional data stores for your program. It’s self-evident that modeling is the most important clutter-limiting measure for dimensional databases, as it enables future development and assures the best performance. However, what’s less often understood is that non-dimensional BI stores also require a good dose of organizing. Any non-dimensional BI storage can benefit from clearly defining, maintaining and enabling all data types and linking them back to their sources. By also associating data with a specific owner and documenting it in a catalogue, you can ensure that your BI storage will be ready for further analysis.

3. Analyse your BI Consumption

Numerous organizations struggle with efficient intelligence analysis because their BI products don’t meet the changing organizational needs. As the enterprise grows and evolves, its dashboards and reports should follow suit. One of the best ways to ensure that the insights you receive are relevant and timely is by cataloguing BI products and attaching them to current business requirements.

You will notice that when data lineage is preserved and ready to handle the full BI data lifecycle, decluttering will become simpler and a lot more efficient.

4. Consider the organizational approach to Business Intelligence

As the last step, take a look at the big picture. For some companies, the source of BI clutter may be rooted in the organizational approach rather than in the BI process itself. For example, when BI elements are handled by several distinct groups within the organization, there is a significant risk that the approach to data isn’t consistent.

In other organizations, what’s needed is a revamped project methodology. For instance, BI processes, strategy, and delivery are rarely effective when following the waterfall approach. Conversely, agile methodology can offer greater benefits, especially when paired with a defined Information Strategy and governance.

Finally, remember that you’re not alone. In fact, the industry has been struggling with disorganized BI for a long time. What helps set the leaders on track is a holistic, organization-wide change that comes with the implementation of architecture practices and defined governance, enabling timely, efficient and structured BI.

Do you see any red flags in your BI program? Give us a call. Together we can ensure that your organization has the right insights for accurate decision-making.