By Elliot King
Sometimes the idea of quality can be more concrete and even measurable. In the 1990s, thread count became the quality standard for bed sheets. Six hundred threads per square inch: great. Three hundred threads: not so good.
There are serious problems with both approaches. In the case of judgment, people tend to differ. James Joyce's Ulysses was named by Modern Library as the best book of the 20th century. Most of us can't read or understand it. The critics knew what they were talking about. We didn't. As for the quality of bed sheets, sometimes a 300-thread-count sheet actually feels better than a 1,000-thread-count sheet. A host of other factors must be taken into account, such as fiber quality, yarn size, and finishing.
Ascertaining data quality faces the same kinds of choices and challenges. On the one hand, people often make an overall judgment about the quality of their data. It is good enough to get the job done. Or at least, it is good enough that the problems poor data quality might cause are not readily apparent. On the other hand, many organizations prudently invest in measuring their data against defined standards. If the data is substandard, the organizations work to remediate it.
In both cases, data quality is in the eye of the beholder. In the first, even if a problem is not apparent, it still can exist. In fact, unseen problems are often the most pernicious. Measuring data quality against defined standards is not a panacea either. Just because 99 percent of your data is accurate, let's say, doesn't mean it is high quality. It just means that it is 99 percent accurate, however you define accuracy.
In addition to focusing on the idea of “quality” itself, organizations may want to look at their business processes. In an email campaign, for example, how many messages are bounced? Data should be thought of as a resource. The goal is to make sure that the resource adequately supports the processes that need it.
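Both ideas above, measuring data against a defined standard and measuring how well data supports a business process, can be sketched in a few lines of Python. This is only an illustration; the field names, sample records, validity test, and campaign figures are hypothetical, not any particular organization's data.

```python
# A minimal sketch of two kinds of data quality metrics:
# (1) accuracy against a standard you define yourself, and
# (2) a process-oriented metric such as an email bounce rate.
# All names, records, and numbers below are hypothetical.

def accuracy_rate(records, is_accurate):
    """Share of records that pass a caller-supplied accuracy test.

    Note that the result depends entirely on how the caller
    defines 'accurate' -- the measurement is not neutral.
    """
    if not records:
        return 0.0
    return sum(1 for r in records if is_accurate(r)) / len(records)

def bounce_rate(sent, bounced):
    """Process metric: fraction of campaign emails that bounced."""
    return bounced / sent if sent else 0.0

# Hypothetical contact list with a deliberately crude validity check.
contacts = [
    {"email": "ana@example.com"},
    {"email": "not-an-email"},
    {"email": "li@example.org"},
]

looks_valid = lambda r: "@" in r["email"] and "." in r["email"].split("@")[-1]

print(f"accuracy: {accuracy_rate(contacts, looks_valid):.0%}")  # 67%, by this definition
print(f"bounce rate: {bounce_rate(1000, 42):.1%}")              # 4.2% of 1,000 sent
```

The point the sketch makes is the article's own: the accuracy number is only as meaningful as the `is_accurate` test behind it, while the bounce rate ties quality directly to a process the business cares about.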