
Dashboard Confessions: The Sins of Satisfaction

Client Satisfaction: Smoke and Mirrors, or Honest Mistakes?

Industry Co-Authors: Glenn Ross and Donald Wedding

Dashboard Confessions is a series of blog posts geared toward business leaders in this community who are interested in analytic topics that impact their KPIs, but who do not necessarily have a background in analytics.

Customer satisfaction research can be great for business, and can create win/win situations for emerging firms and their clients. Yet executives who grow their business units by the numbers are often left scratching their heads as to why other top-line customer behavioral KPIs don't trend with their satisfaction metrics, even after adjusting for a time lag.

When we investigate relationships between KPIs, one reason we sometimes find none comes down to two concepts that experienced research professionals manage as a best practice: reliability and validity. These apply to all KPIs, but they matter most when the thing being measured is intangible. Counting dollars from a cash register raises a different set of issues than measuring satisfaction.

A reliable measure is consistent over time. If someone goes on a diet and the scale is consistently ten pounds off in the same direction, the relationship between the diet and the weight can still be recognized, even though the true weight is never accurately recorded. And because the numbers are good for detecting relationships, one might be tempted to conclude they are accurate. There are other forms of reliability, each with its own examples.

A valid measure actually measures what it claims to measure. In the scale example, the weight readings can be perfectly reliable, but because they are systematically off, they are not a valid measure of true weight. The cumulative effect of an invalid measure can be substantial. These are the kinds of issues that research and quantitative professionals actively manage on a day-to-day basis. There are other forms of validity, each with its own examples.
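To make the distinction concrete, here is a minimal Python sketch of the bathroom-scale example. The weights and the ten-pound bias are illustrative numbers invented for this post, not data from any study.

    # Reliability without validity: a scale that reads 10 lbs heavy, every time.
    import random

    random.seed(42)

    true_weights = [random.uniform(140, 220) for _ in range(1000)]  # true weights (lbs)
    BIAS = 10.0                                                     # consistent offset

    # The scale is *reliable*: it applies the same offset every time ...
    measured = [w + BIAS for w in true_weights]

    # ... so relationships (e.g., weight lost on a diet) are preserved:
    after_diet = [w - 5.0 for w in true_weights]        # everyone loses 5 lbs
    measured_after = [w + BIAS for w in after_diet]
    observed_loss = measured[0] - measured_after[0]     # 5.0 -- the trend survives

    # ... but it is not a *valid* measure of true weight: every level is wrong.
    error = sum(m - w for m, w in zip(measured, true_weights)) / len(true_weights)
    print(f"Observed weight loss: {observed_loss:.1f} lbs (correct)")
    print(f"Average measurement error: {error:.1f} lbs (systematically off)")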

Since data and research professionals view themselves as advocates for executives and shareholders, they are often tasked with negotiating fixes for measures that sit outside their domain in the organization yet appear in enterprise reporting. These are often lengthy endeavors in large organizations, but keeping metrics honest is a mission-critical responsibility regardless of where the wayward numbers come from.

In a business climate where top-line revenue is growing and all indicators point toward imminent dominance of market share, executives might be tempted to use other KPIs as a proxy for client satisfaction. Other executives in the same situation choose instead to reinvest their profits because they understand the value of long-term relationships and customer lifetime value. These measures are especially mission-critical for firms that refuse to compete on price and instead adopt a price-plus-service value proposition in hopes of capturing a market niche.

Popular solutions for measuring client satisfaction range from complex methods, such as structural equation models that explore the underlying dynamics of satisfaction, to the answer to a single carefully worded question used to calculate what is known as the Net Promoter Score.

These are not the only options for client satisfaction research, but no matter which one is chosen, four common issues can turn an earnest effort to measure client satisfaction into a series of red herrings for executives.

Ironically, these red herrings often hide in the contextual cues that accompany research results, cues that existing resources may not think to question. So let's examine these contextual cues and qualify them.

Cue #1:

The survey was sent to a random sample of clients, so the results represent the opinions and attitudes of the customer base. This is a big one.

Not Always True.

We can send surveys to clients randomly, but those who respond to surveys differ from those who do not. Differences exist even if they cannot be measured on a database.

How can we prove this to in-house research staff? A data-mining professional can try to fit a model that predicts the survey scores from customer profile and transaction behavior. If no model can be fitted, that alone would likely explain why the satisfaction KPI does not trend with other customer behavior KPIs, so in some cases it's back to ground zero with a new method.

If a model can be fitted, the data miner can score the customers who were not surveyed or who did not respond. If their scores differ from the respondents', you can reliably conclude that the survey does not represent the customer base and, at minimum, qualify the results to management with guidance from your independent research partner. Or you can simply use a different sampling methodology.
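The sketch below illustrates this check under simplifying assumptions: the features (tenure and monthly spend), the response mechanism, and all of the data are synthetic stand-ins, and scikit-learn's LinearRegression stands in for whatever model your data miner would actually fit.

    # Fit a model of the survey score on respondents, then score non-respondents.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical customer features: tenure (years) and monthly spend.
    X = np.column_stack([rng.uniform(0, 10, n), rng.normal(100, 30, n)])

    # Simulated base: satisfaction rises with tenure, and long-tenured
    # (happier) customers are also more likely to answer the survey.
    satisfaction = np.clip(3 + 0.5 * X[:, 0] + rng.normal(0, 1, n), 0, 10)
    responded = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 6)))

    # Fit on respondents only -- the data we actually have.
    model = LinearRegression().fit(X[responded], satisfaction[responded])

    # Score the non-respondents and compare.
    pred_nonresp = model.predict(X[~responded])
    print(f"Mean score, respondents:               {satisfaction[responded].mean():.2f}")
    print(f"Predicted mean score, non-respondents: {pred_nonresp.mean():.2f}")
    # A gap like this says the survey does not represent the full customer base.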

Cue #2:

The survey sample size is large, so the results must be accurate.

This notion is popular because of how survey results are reported in the press: those figures come from carefully executed measurement processes in which research professionals manage the problems from Cue #1. A large sample size does stabilize the results as each new responder is added, but it is of little value if the results are intended to apply to the entire customer base and the base was not sampled properly.
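A toy simulation makes the point. The satisfaction scores below are fabricated for illustration only, but they show a sample of 20,000 drawn from a biased slice missing the truth by a wide margin while a simple random sample of 500 lands close.

    # Cue #2 in miniature: a large sample does not fix a biased one.
    import numpy as np

    rng = np.random.default_rng(1)

    # Suppose true satisfaction in the full base averages 6.0 on a 0-10 scale.
    base = np.clip(rng.normal(6.0, 2.0, 100_000), 0, 10)

    # A huge sample drawn only from happy customers (scores above 7) ...
    happy = base[base > 7]
    big_biased = rng.choice(happy, 20_000)

    # ... versus a modest simple random sample from the whole base.
    small_random = rng.choice(base, 500)

    print(f"True mean:            {base.mean():.2f}")
    print(f"Large biased sample:  {big_biased.mean():.2f}  (n=20,000, way off)")
    print(f"Small random sample:  {small_random.mean():.2f}  (n=500, close)")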

Cue #3:

The research method is backed by industry and academic research.  

Not always true. This only applies if the researcher follows the exact methods from the research. Minor adaptations of industrial research methods are not replications of the original research.

For example, the Net Promoter Score mentioned earlier is derived from a 0-10 scale, so a person has eleven options to choose from. Some firms have adopted a 1-10 scale instead. This may seem like a minor alteration, but since the scale is converted to categories, removing the zero gives a survey responder one less option to denote dissatisfaction. The likely result is that the Net Promoter Score is artificially inflated. Either way, comparing survey results from two different scales against industry benchmarks is an invalid maneuver that your research partner should watch out for.
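The sketch below shows one way that inflation can play out. The mapping from a customer's underlying sentiment to a scale point is a simplifying assumption, and the standard NPS cut-offs (detractors at 6 and below, promoters at 9 and above) are applied to both scales.

    # How dropping the zero from the NPS scale can inflate the score.
    import numpy as np

    def nps(scores, detractor_max=6, promoter_min=9):
        """NPS = %promoters - %detractors, reported on a -100..100 scale."""
        scores = np.asarray(scores)
        promoters = np.mean(scores >= promoter_min)
        detractors = np.mean(scores <= detractor_max)
        return 100 * (promoters - detractors)

    rng = np.random.default_rng(7)
    sentiment = rng.random(50_000)            # latent sentiment in [0, 1]

    # The same customers express the same sentiment on each scale.
    scale_0_10 = np.rint(sentiment * 10)      # eleven options: 0..10
    scale_1_10 = 1 + np.rint(sentiment * 9)   # ten options: 1..10

    print(f"NPS on 0-10 scale: {nps(scale_0_10):+.0f}")
    print(f"NPS on 1-10 scale: {nps(scale_1_10):+.0f}  (inflated)")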

Cue #4:

Using panels instantly solves all of these problems.

False. Market research panels survey people who meet a certain demographic profile. The approach is profoundly useful, but it does not solve the problems from the previous cues without serious consideration from an experienced market research professional. Panels have specific applications.

Use the tips from this post when doing Net Promoter Amplification.

The good news is that there are innovative approaches to managing these issues that will keep your executive dashboard calibrated, so that major investment decisions are supported by metrics that have been thoroughly vetted by measurement experts. In addition, there is an emerging trend to measure customer-facing processes as part of process engineering. Some Fortune 500 companies use methods such as Six Sigma to engineer, simulate, and measure customer-facing processes, adding more contextual information to the executive dashboard.

Questions, comments, and smart remarks will help the authors reach out to junior executives on these topics, and of course the beloved input from the AnalyticBridge community will be cited.

 

 

