Melissa Cites the Single Customer View as Vital to Customer-Centric Marketing

Melissa Team | 2021, Customer Centricity, Customer Experience, Customer Identities, Customers, Data Enrichment, Deduping, Demographics, KYC, Press Release

Comprehensive customer data powers personalization at scale, eliminating the knowledge gap

Rancho Santa Margarita, Calif. – May 26, 2021 – Melissa, a leading provider of global data quality and address management solutions, today announced strategies for marketers to eliminate the knowledge gap in customer data, enabling personalization at scale. With aggregated data from disparate resources, Melissa’s tools and services resolve customer identity and enhance profiles by connecting to and between marketing channels such as email, social, direct, and in-store options. Marketers gain access to a single view of the customer, allowing clear, comprehensive insight into customer behavior and preferences.


Customer Interactions

Blog Administrator | Address Quality, Analyzing Data Quality, Customer Centricity, Data Quality

By David Loshin

In my last set of posts, I suggested that organizations reconsider the scope of the concept of the “customer” and rethink how they define the relationship between the organization and a customer. More to the point, I wanted to begin to explore how managing the different aspects of the customer relationship can enhance customer centricity, improve the customer experience, and eventually lead to increased profitability.

I summarized with the suggestion that you consider every interaction with any entity (individual or organization) in which there is an exchange of value as a customer interaction, and that is the topic of this week’s post.

That suggestion hinges upon two core concepts. The first is that one can effectively identify the scenarios within any business process where two entities interact and there is an identifiable exchange of value. The second is that one can describe and quantify what that exchange of value is.

We can begin with an assessment of the business functions that traditionally are associated with customer interactions and their processes. For example, the marketing function seeks to attract and engage prospects, while the sales function looks to convert prospects into committed purchasers.

There may be a fulfillment function tasked with delivery of the purchased product or service, the finance function to collect payments, and a customer service function to deal with inquiries and complaints. Each of these business functions has some interaction with customers; the challenge is to identify (and document, if necessary) the business processes and then specify where in the process the customer interaction occurs.
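For teams that want to capture those interaction points in data, here is a minimal sketch (in Python, with hypothetical field names of my own choosing, not a prescribed schema) of a record structure that makes both the touchpoint and the exchange of value explicit:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CustomerInteraction:
        """One touchpoint between the organization and a customer entity."""
        customer_id: str          # identifier for the individual or organization
        business_function: str    # e.g. "marketing", "sales", "fulfillment", "service"
        touchpoint: str           # where in the business process the interaction occurs
        value_given: str          # what the organization provides in the exchange
        value_received: str       # what the organization receives in return
        occurred_at: datetime     # when the interaction took place

    # Example: a customer service call logged as an exchange of value
    interaction = CustomerInteraction(
        customer_id="CUST-0001",
        business_function="customer service",
        touchpoint="support call",
        value_given="issue resolution",
        value_received="retention and feedback",
        occurred_at=datetime(2021, 5, 26, 14, 30),
    )
    print(interaction)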

Those customer interactions will be the focal point of our next series of postings. Next we will consider the exchange of value, which frames the point of the interaction, and then we will look at the information about the customer that can be used to increase the value of the interaction.





Managing Customer Connectivity

Blog Administrator | Analyzing Data, Analyzing Data Quality, Customer Centricity, Data Management, Data Quality

By David Loshin

At the end of our last entry, we had come to the conclusion that standardization of potentially variant data values was a key enabler for evaluating record similarity when looking to group customer records together based on any set of characteristic attributes. From an operational standpoint, this activity is supported by data quality tools that can parse and standardize data.
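As a rough illustration of what that parsing and standardization accomplishes (a simplified sketch, not a depiction of any particular vendor's tool), variant spellings of the same value can be reduced to a canonical form before any similarity comparison takes place:

    import re

    # Hypothetical lookup table; real data quality tools ship far richer rule sets.
    STREET_SUFFIXES = {"ave": "avenue", "av": "avenue", "st": "street", "rd": "road"}

    def standardize_address(raw: str) -> str:
        """Lowercase, strip punctuation, and expand common street suffix abbreviations."""
        tokens = re.sub(r"[.,]", " ", raw.lower()).split()
        return " ".join(STREET_SUFFIXES.get(token, token) for token in tokens)

    # Two variant representations collapse to the same standardized value.
    print(standardize_address("1824 Polk Ave., Memphis, TN 38177"))
    print(standardize_address("1824 Polk Avenue, Memphis TN 38177"))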

But the process must go beyond the purchase and use of the tools. For any
customer centricity program in which connectivity is relevant, there are going
to be multiple dimensions of connectivity employed in business decisions. We can
immediately fall back on my original example of the “household” grouping, and
depending on the objectives for customer outreach and experience, other groups
will be overlaid with each other.

Here is a clear example that builds on my post from a few weeks back. We originally suggested that the household was relevant for mobile telephone companies looking to expand residential customer commitment through increased product sales and service contracts within the household, since one decision-maker might be responsible for adding new lines and services.

That same mobile telephone company might also examine its business-to-business relationships and look to expand its footprint among business customers, suggesting a new grouping of customers based on their employer.

Overlaying the households and the corporate customers would provide a picture of each company’s existing brand predispositions among its employees. Identifying the key corporate decision makers and offering combined business and residential account discounts might be a good way to exploit knowledge of overlapping connected groups.

The result is that the analysis not only depends on good quality data, it also assumes that good processes are in place for managing the hierarchy data that maps individuals into groups – an example of what could be called metadata quality. Maintaining hierarchies of concepts, data attributes, and mappings among individuals based on those hierarchical attributes (and, of course, similarity scoring for linkage!) is a valuable skill, one that we will revisit in an upcoming series of posts…
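As a small sketch of what managing that hierarchy data might look like in practice (plain Python with invented identifiers, standing in for a proper hierarchy management process), consider overlaying a household grouping and an employer grouping to find where they intersect:

    from collections import defaultdict

    # Hypothetical group assignments produced by earlier record linkage steps.
    household_of = {"john": "HH-1824-POLK", "emily": "HH-1824-POLK", "raj": "HH-0077-ELM"}
    employer_of = {"emily": "Acme Corp", "raj": "Acme Corp", "john": "Bravo LLC"}

    # Overlay the two hierarchies: which households contain employees of which company?
    households_by_employer = defaultdict(set)
    for person, employer in employer_of.items():
        if person in household_of:
            households_by_employer[employer].add(household_of[person])

    # e.g. Acme Corp reaches into two residential households, which might be
    # candidates for combined business and residential account offers.
    print(dict(households_by_employer))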




Centricity and Connections: Clearing the Air

Blog Administrator | Address Quality, Analyzing Data, Customer Centricity, Data Quality, Record Linkage

By David Loshin

There are opportunities for adjusting your strategy for customer centricity based on understanding the grouping relationships that bind individuals together (either tightly or loosely). In the last post, we looked at some examples in which linking customer records into groups was straightforward because the values to be compared and weighted for similarity were exact matches. When the values are not exact, some level of doubt is introduced into the decision process for including a record in a group.

Let’s revisit our example from my last post by adding in a new record for evaluation:

 

John Hansen, 1824 Polk Ave., Memphis TN 38177
Emily S. Hansen, 1824 Polk Ave., Memphis, TN 38177
Emily Stoddard, 1824 Polk Avenue, Memphis, TN

We had already decided that John and Emily shared a household, but all of a sudden we have a third record with a name that shares some similarity with one of the existing names, and an almost exact street address match (note that the third record is missing a ZIP code).

We could speculate that “Emily Stoddard” changed her name after she married “John Hansen,” or that she changed her address when she moved from her bachelorette pad to their newlywed home. But without exact knowledge of the facts, it is only speculation, and one must exercise some care when relying on speculation for business decisions.

If a few small differences pose a challenge to linkage, what would you think of
dozens, or even hundreds of variations for names, locations, or other data
values?

Just as a case in point: in a hallway conversation at the recent Data Governance
Conference, a colleague mentioned that one of his customers’ databases had over
one hundred variations for a certain big-box retailer’s name! The conclusion you
can draw from this is that a key part of the record linkage process involves
some traditional data quality tactics, namely appending a standardized version
of the data to help your linkage algorithms score record similarity as a prelude
to establishing connectivity.
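A bare-bones sketch of that tactic (simple Python standing in for a real data quality tool, with made-up retailer names) is to append a standardized form of each value and score similarity on the standardized field rather than the raw one:

    from difflib import SequenceMatcher

    def standardize(value: str) -> str:
        """Rough normalization: lowercase, drop punctuation, collapse whitespace."""
        cleaned = value.lower().replace(".", " ").replace(",", " ").replace("-", " ")
        return " ".join(cleaned.split())

    def similarity(a: str, b: str) -> float:
        """Score two values between 0.0 and 1.0 using their standardized forms."""
        return SequenceMatcher(None, standardize(a), standardize(b)).ratio()

    # Two of the many variations you might see for one retailer's name
    print(similarity("Big Box Retail Stores, Inc.", "BIG-BOX RETAIL INC"))   # high score
    print(similarity("Big Box Retail Stores, Inc.", "Corner Grocery LLC"))   # much lower score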


Customer Centricity and Connections: Establishing the Link

Blog Administrator | Customer Centricity, Data Management, Data Quality, Record Linkage

By David Loshin

In my last post, we began to look at the value proposition for grouping individual customers into logical groups. We started with a grouping that generally appears naturally, namely the traditional residential household.

We talked about householding in a previous blog posting, but it is worth
reviewing the basic approaches used for determining that a group of individuals
share a household. The general approach is to analyze a collection of data
records and examine sets of identifying attributes for degrees of similarity in
naming and residence locations. Many situations are relatively straightforward,
such as this example:


John Hansen, 1824 Polk Ave., Memphis TN 38177
Emily S. Hansen, 1824 Polk Ave., Memphis, TN 38177

In this example, two individuals share both a last name and a location address, and although the data evidence does not guarantee the truth of the inference, it might be reasonable to suggest that because there is a link between the family name and the residence location, these two individuals are members of the same household. The algorithm, then, is to link records into a collection of similar records based on similarity of the surname and residence characteristics.
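In code, a naive version of that householding pass might look like the following sketch (linking on a normalized surname plus address key; production matching would use weighted, probabilistic scoring rather than exact keys):

    from collections import defaultdict

    records = [
        {"name": "John Hansen", "address": "1824 Polk Ave., Memphis TN 38177"},
        {"name": "Emily S. Hansen", "address": "1824 Polk Ave., Memphis, TN 38177"},
    ]

    def household_key(record: dict) -> tuple:
        """Naive key: last token of the name plus a lightly normalized address."""
        surname = record["name"].split()[-1].lower()
        address = " ".join(record["address"].lower().replace(",", " ").replace(".", " ").split())
        return (surname, address)

    # Records that share the same key are linked into one household group.
    households = defaultdict(list)
    for record in records:
        households[household_key(record)].append(record["name"])

    print(dict(households))  # both Hansen records land in the same group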

However, the concept of grouping is not limited to conventional groups, since
there are many artificial groups formed as a result of shared interests or
similarities in profile criteria. For example, people interested in certain
sports car models often organize “fan clubs,” new mothers often organize toddler
play groups, and sports team fans are often rabid about their franchise
alliances.

In turn, your company might want to create marketing campaigns that target sets
of individuals grouped together by demographic or psychographic attributes. In
these cases, you would adjust your algorithms to link records based on
similarity of the values in other sets of data attributes.
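The only thing that changes is the set of attributes the linkage key is built from. As a hypothetical illustration (invented records and segment attributes), the same grouping logic can cluster records on shared interest and age band instead of surname and address:

    from collections import defaultdict

    prospects = [
        {"name": "A. Rivera", "interest": "sports cars", "age_band": "35-44"},
        {"name": "K. Osei", "interest": "sports cars", "age_band": "35-44"},
        {"name": "L. Chen", "interest": "toddler play groups", "age_band": "25-34"},
    ]

    # Group on demographic/psychographic attributes rather than name and address.
    segments = defaultdict(list)
    for prospect in prospects:
        segments[(prospect["interest"], prospect["age_band"])].append(prospect["name"])

    print(dict(segments))  # candidate campaign segments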

Establishing the link goes beyond looking at the data that already exists in
your data set. Rather, you may need to append additional data acquired from
alternate sources.

And, interestingly enough, you will need to connect the acquired data to your
existing data, and that requires yet another record linkage effort. Apparently,
understanding customer collectives is pretty dependent on record linkage. And
while linking records is straightforward when all the data values line up
nicely, as you might suspect, there are some curious intricacies of linkage in
the presence of data with questionable quality.