The main customer experience measurement and improvement systems – Part 2 – CSAT / Customer Satisfaction

When you are asked to rate your ‘overall satisfaction’ with something, you have entered the Customer Satisfaction rating system. The most common way of doing this is to ask you to rate satisfaction on a scale from 1 to 5, where 5 means ‘extremely satisfied’. This scale has a minor challenge, as do all scales that do not contain a zero: a small proportion of respondents will believe that 1 is the best rating, rather than 5. After all, you want to be number one, don’t you? At the level of your overall brand, the satisfaction metric does not predict revenue as well as the ‘willingness to recommend’ metric used in the Net Promoter System, at least for most, though not all, industries. In that context, it is worth noting that Enterprise Rent-A-Car, used as an example in The Ultimate Question 2.0, discovered that overall satisfaction was a better predictor than willingness to recommend. In general, for surveys about telephone, web and chat support, the Customer Effort Score (to be covered soon in another article) is a better revenue predictor.

“Top box” satisfaction

A common method of communicating results is to talk in terms of ‘top box’ or ‘top-three box’ satisfaction, for example. This means arbitrarily deciding that people who give you the top score, or the top two or three scores, on a five-point scale are ‘satisfied’. I have not been able to find studies of differences in customer behavior between the different categories or groupings. However, based on Net Promoter System studies, it seems unrealistic that someone who gives you a 3 is actually satisfied. The NPS research suggests that someone who gives you a 5 is likely to recommend your product or service, someone who gives you a 4 does not care much either way, and a 3 is already a sign of negative views. Note that some companies use seven-point scales.
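To make the arithmetic concrete, here is a minimal sketch in Python of how the same set of responses produces very different ‘satisfied’ percentages depending on which boxes you count. The ratings and cut-offs are purely illustrative, not data from any real survey.

```python
from collections import Counter

# Invented distribution of 1-5 satisfaction ratings, for illustration only.
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 3, 4, 5, 5, 4, 3, 5]

counts = Counter(ratings)
total = len(ratings)

def box_share(top_n: int) -> float:
    """Share of respondents whose rating falls in the top `top_n` boxes of a 5-point scale."""
    return sum(counts[score] for score in range(6 - top_n, 6)) / total

print(f"Top-box (5 only):       {box_share(1):.0%}")
print(f"Top-2-box (4 and 5):    {box_share(2):.0%}")
print(f"Top-3-box (3, 4 and 5): {box_share(3):.0%}")
```

With these invented numbers, the same fifteen responses can be reported as 40%, 67% or 87% ‘satisfied’, which is exactly why the cut-off has to be stated every time the number is quoted.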

The principal communication challenge

The lack of a single standard customer satisfaction scale and definition causes communication issues. It is quite easy to find companies that say “92% of our customers are satisfied.” What does that mean? On a five-point scale, which of the five ratings are they counting to get to 92%? Since there is no single definition, anyone presenting the metric has to explain how it is calculated, and risks losing time in an argument with someone in the audience who measures it a different way.

CSAT is a metric, not a system

While CSAT is probably the most common satisfaction metric, it is not a system. It is entirely up to you to work out what the number means, what customers actually want you to improve, and how you should go about it.

Composite satisfaction metrics

Some companies use the term ‘customer satisfaction score’ or CSAT but actually calculate it in a different way. Predicting customer loyalty, meaning the desire of customers to keep on buying from you, can of course take different forms. The more sophisticated satisfaction surveys also include several questions that are not about satisfaction. The most common ones ask customers to use the same type of rating scale to answer questions such as “This company has earned my loyalty”, “How likely are you to repurchase?”, “This company is a leader in its field”, “How likely are you to recommend this company or product to a colleague?” and so on. Several metrics can then be aggregated to form a satisfaction index. Where this is done well and with representative customer samples, it can be an excellent revenue predictor. I will cover some composite metrics in an article on proprietary measurement systems.
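For readers who like to see how an index of this kind might be put together, here is a minimal sketch in Python. The question names, weights and sample means are all invented for illustration; they do not describe any particular proprietary system.

```python
# Hypothetical composite satisfaction index: a weighted average of several
# 1-5 rating questions, rescaled to 0-100. All names, weights and numbers
# below are invented for illustration.

QUESTION_WEIGHTS = {
    "overall_satisfaction": 0.40,
    "likelihood_to_repurchase": 0.30,
    "likelihood_to_recommend": 0.20,
    "perceived_leadership": 0.10,
}

def composite_index(mean_ratings: dict) -> float:
    """Weighted average of per-question mean ratings (1-5), rescaled to 0-100."""
    weighted = sum(QUESTION_WEIGHTS[q] * score for q, score in mean_ratings.items())
    # Rescale from the 1-5 range to 0-100 so the index reads like a percentage.
    return (weighted - 1) / 4 * 100

# Mean ratings per question across a hypothetical, representative customer sample.
sample_means = {
    "overall_satisfaction": 4.2,
    "likelihood_to_repurchase": 3.9,
    "likelihood_to_recommend": 4.0,
    "perceived_leadership": 3.5,
}

print(f"Composite satisfaction index: {composite_index(sample_means):.1f} / 100")
```

With the invented inputs above the index comes out at 75.0, but the hard part is not the arithmetic: it is choosing the questions, validating the weights against actual customer behavior, and keeping the sample representative.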

Lists of things to rate

Satisfaction surveys usually ask you to rate a list of things, and the list can be very long. For years, I participated in a survey panel covering the travel industry, and I have received surveys that contained nothing but satisfaction rating scales yet took more than half an hour to complete. The main problems with lists of things to rate are:

  • It is your list, not the customers’ list. Most commonly, these lists mirror the way your organization is structured: you want to get some sort of feedback for everyone. This may not correspond to what the customer wants to tell you about. Notably, if a competitor does something that you do not, only asking about what you currently do will prevent the customer from giving you that feedback.
  • It creates the illusion that each of the factors you ask about is equally important, and that subliminal message is particularly difficult to undo. Imagine you have 50 things that you ask customers to rate, and that seven of them have the worst rating possible. Once these ‘red’ items are visible, you will be forced to turn them around, even if they have no importance whatsoever.

Conclusion

The lack of a standard definition of what we mean by ‘Customer Satisfaction’ causes major communication challenges and is the principal argument against using CSAT. However, no matter how you define the metric, its trends relative to those of your competitors are still meaningful. If you can find a reliable way of learning why those trends are happening, you may well have something useful.

As always, I have opinions, usually based on fact. That does not make me correct. Please feel free to comment below. This article is a slightly edited part of a chapter of one of our four books, in this case Customer Experience – Design & Implementation. Our fourth book, So Happy Here, came out recently and is available on Amazon. It is a heavily annotated book of my brother’s business cartoons, most of which have appeared in our first three books, my blog posts or Tweets. This and all of our books make great end-of-year presents for your colleagues and teams.

Next time: Customer Effort Score