Survey design considerations

Edited by Maurice FitzGerald – Drawing by Peter FitzGerald

There are too many surveys in the world. Be selective. You don’t have unlimited resources. Nor do your customers. Don’t waste their time or yours. Here are some considerations that should help.

Don’t ask what you already know

You will use your customer database to determine who to contact by phone or email. You already know who they are, so don’t ask them their names. You already know what company they work for and what size it is, so don’t ask. Use the information you have to demonstrate that you remember the customer. This is less common than you might expect. I have even been asked for my email address in a survey sent to me by email. Yes, taken too far this can feel like creepy stalking, and that should be avoided.

Challenging when you subcontract

When you ask a third party to run your feedback process for you, the confidentiality agreements you have in place with your customers will usually prevent you from transferring full customer data to the subcontractor. If you think this is simple, talk to your company lawyers. To be able to transfer confidential data to a third party, that third party has to be acting as your legal “agent”. Agency status automatically carries a number of other obligations, making it rare in practice. The consequence of all this is that the company executing the feedback process for you often has to ask the customers for information you already have, wasting the customers’ time. You should be able to avoid this by providing the subcontractor with a unique code for each customer, then matching the code back to your full set of customer data once you have the results, as sketched below. Not easy, but doable.
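Here is a minimal Python sketch of that code-matching approach, assuming the lookup table stays in-house and the subcontractor only ever sees the opaque codes. The file layout and the column name are hypothetical.

```python
import csv
import uuid

def build_code_table(customers):
    """One opaque code per customer. This table stays in-house;
    only the codes are sent to the subcontractor."""
    return {str(uuid.uuid4()): customer for customer in customers}

def merge_results(code_table, results_path):
    """Re-attach full customer data to the subcontractor's results.
    The results CSV carries a 'code' column plus the survey answers,
    never names or other confidential fields."""
    merged = []
    with open(results_path, newline="") as f:
        for row in csv.DictReader(f):
            customer = code_table.get(row["code"])
            if customer is not None:  # ignore codes we did not issue
                merged.append({**customer, **row})
    return merged
```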

Tricky vendor relationships

Survey service providers often lead with a convincing pitch: customers will give more honest answers to a “neutral” third party. There are no studies that support this point of view, and I can’t see why it would be true. I strongly believe you should design and run your own surveys. A third party will never understand the necessary product and service vocabulary as well as you and your customers do. There is one exception: competitive benchmark surveys, which must be run by a neutral third party.

Try different things

If, for example, you have thousands of customers for a particular product and want to solicit their feedback and improvement suggestions via email, randomly try different formulations of the email to see which gets the best response rate. Carefully test and measure the effectiveness of different survey questions. Test any new customer experience process in one business before implementing it everywhere. Experiment, experiment, experiment.
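As one way of running that kind of email experiment, here is a minimal Python sketch of a random split and the resulting response rates. The variant texts, the seed and the data structures are all placeholders.

```python
import random

# Hypothetical subject lines for the same survey invitation
VARIANTS = {
    "A": "Help us improve the product you use",
    "B": "Two minutes to make your next release better?",
}

def assign_variants(emails, seed=2024):
    """Randomly assign each address to one email formulation.
    A fixed seed keeps the split reproducible for later analysis."""
    rng = random.Random(seed)
    return {email: rng.choice(sorted(VARIANTS)) for email in emails}

def response_rates(assignment, responders):
    """Response rate per variant: responders / recipients."""
    sent = {v: 0 for v in VARIANTS}
    replied = {v: 0 for v in VARIANTS}
    for email, variant in assignment.items():
        sent[variant] += 1
        replied[variant] += email in responders
    return {v: replied[v] / sent[v] for v in VARIANTS if sent[v]}
```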

Keep it short

Keeping surveys short affects two things in a positive way: response rates and completion rates.

Response rates

There are many different factors that affect response rates, and survey length is not at the top of the list. The main thing that affects response rates is your relationship with the people on your contact list. If you write an email in your name to customers you know personally, you will get a relatively high response rate, no matter what the length of the survey. If you are trying to survey people who use a competing product and have never bought anything from your company, response rates are likely to be low. They have already proven that they don’t care much about you, so why should they waste their time? There is no good, recent academic research on the relationship between the number of questions on a survey and the proportion of people who answer it, everything else being equal. It should be quite easy for a large company with a lot of customers to run an A/B test, asking half of a large mailing list to answer a different survey from the other half. For the time being, you will have to rely on common sense and just trust that shorter surveys help.
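If you do run that A/B test, a standard two-proportion z-test will tell you whether the gap between the two response rates is bigger than chance would explain. Here is a sketch using only the Python standard library; the counts are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(replies_a, sent_a, replies_b, sent_b):
    """Two-sided z-test: is the difference between two response
    rates larger than random noise would explain?"""
    p_a, p_b = replies_a / sent_a, replies_b / sent_b
    pooled = (replies_a + replies_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented numbers: short survey vs. long survey, 5,000 emails each
z, p = two_proportion_z_test(480, 5000, 430, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # small p: difference is unlikely to be chance
```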

Completion rates

For email-based surveys, SurveyMonkey has done an outstanding piece of work to determine the effect of adding a question on what are called drop-out rates. A drop-out is a person who responds to at least one question, but does not complete the survey. SurveyMonkey looked at 2,000 surveys with one question, 2,000 with two questions, and so on all the way up to 2,000 surveys with 50 questions. The graph below shows what they found. Personally, I was surprised that so few people failed to complete the longest surveys. Still, the message remains that the shorter the survey is, the better the result.
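For clarity about the metric itself: the drop-out rate is just the share of people who started but did not finish, as in this hypothetical sketch.

```python
def drop_out_rate(started, completed):
    """Share of people who answered at least one question
    but did not finish the survey."""
    return (started - completed) / started

# e.g. 1,000 people answered question one, 910 reached the end
print(f"{drop_out_rate(1000, 910):.0%}")  # -> 9%
```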

Fig. 1 – Result of SurveyMonkey research on the relationship between survey length and drop-out rates

It is tempting to keep surveys very short to improve completion rates and avoid wasting customers’ time. However, some demographic questions usually earn their place: I have yet to see a survey where demographics did not make a difference. Sometimes the differences are surprising.

What we mean by demographics

Demographic information is whatever you use to categorize responses. Again, you should never ask people to provide the categories if you already have the information. For web-based surveys, the categories you don’t already know are normally collected using pull-down menus. Here are some examples:

  • For product surveys, responses may vary depending on whether the respondent is a purchasing decision-maker, an influencer or an end user. These categories have to be self-reported.
  • For employee surveys, whether the person works from home, works in the office or is always on the road may matter. Perhaps oddly, many employee surveys seem to suggest that people who work from home are happier.
  • For a survey about a software product, it is useful to know what version of the software the customer is using. People using older versions of software may provide improvement suggestions that have already been implemented in the more recent versions. It is then easy to provide instant feedback and make them happy.

The consistent demographic surprise

Even in B2B environments, women often provide different input from men. You should be able to do the demographic split well enough using people’s names. In HP’s research on consumer printers, women mentioned ease of use and overall quality twice as often as men as reasons for recommending the printer. Men mentioned the price of supplies, low ink consumption and value for money twice as often as women. In short, the survey showed that women care about different things than men do. It changed the messaging and marketing we used, as well as the training given to people in retail stores.
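If you want to try that name-based split, here is a minimal sketch. The tiny lookup table is purely illustrative; a real project would use a large public first-name reference list, and unmatched names should stay uncategorized rather than be guessed.

```python
# Illustrative lookup only; a real list would have thousands of entries.
NAME_GENDER = {"anna": "F", "maria": "F", "john": "M", "pierre": "M"}

def split_by_gender(responses):
    """Group responses by inferred gender; leave unmatched
    names uncategorized rather than guessing."""
    groups = {"F": [], "M": [], "unknown": []}
    for response in responses:
        first = response["name"].split()[0].lower()
        groups[NAME_GENDER.get(first, "unknown")].append(response)
    return groups
```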

An example from a student survey

A Swiss business school ran a simple NPS survey among its students. Once again, women and men scored quite differently, in this case depending on where they lived. Women living at home were far happier than male students living at home. This was not expected. It was pure chance that the survey owner decided to look at gender as a factor.

Fig. 2 – Student survey example

I suppose one other entertaining factoid from this survey is worth mentioning. While the beauty of the campus, with its amazing views of the Alps and Lake Geneva, was in the top three reasons students liked the school, only half as many Swiss students mentioned it as French students did. Too blasé, I suppose.

Action orientation

If you don’t already have the demographic information but it is of no conceivable use in an action plan, please don’t ask. Let’s suppose you want to know what should be improved in your commercial air conditioning products. Asking the respondents what age they are is of no conceivable relevance.

An important factor in surveys is asking questions that let the customer provide feedback on what they think is important, not what you think is important.

The British Airways counter-example

As a frequent flyer, I have answered many airline surveys over the years. The British Airways survey process is the worst I have ever seen in any industry, from any company. I am confident they believe it is wonderful, or they would have changed it by now. It certainly gives them plenty of metrics to use to beat up their teams. Here is a screen capture from one survey. These are just the top-level summaries; the cabin crew section, for example, has sixteen questions.

Fig. 3 – British Airways survey example

The main issues with these types of surveys are:

  1. If you are the company running the survey, you will tend to believe that each of the 100+ questions is equally important.
  2. No distinction is made between hygiene factors and things that give you a competitive advantage. Hygiene factors just have to be “good enough”, and it is a waste of resources to make them better. Though it is not an airline example, Rob Markey likes to cite credit card approvals in stores: you expect the approval to work, it cannot delight you, and you won’t notice it at all unless something goes wrong.
  3. There is no screen where you are directly asked what you want British Airways to improve. That is the only conceivable way to decide which of these many questions actually matter.
  4. The number of questions means some people won’t start the survey at all, and there will be slightly more drop-outs than with a shorter survey.
  5. The survey is run by a third party who guarantees anonymity, so there is no way for British Airways to follow up with you.
  6. They do not provide any information about what they have learned from past surveys or any improvements they have made. Even putting aside the impossibility of individual follow-up, there is no commitment to tell those surveyed collectively what has been learned.

What should it look like?

If I were designing the survey, here is what it would probably look like:

  • First, the email, personally addressed to me, would read, “Dear Mr. FitzGerald, thank you for using British Airways for your flight from Geneva to London and onwards to Denver on October 22nd. We are trying to improve our long-haul service. We can see that you checked in online, had one bag in the hold that arrived with you, and that you visited our lounge in Heathrow Terminal 5 while you were in transit. Fortunately, your flights were all on time, and the crew did not report any issues with the in-flight entertainment system. We see that you had a special meal and hope you enjoyed it. We would now like to ask you just four questions about your trip on October 22nd. The link below will take you to the secure feedback form. The survey should take you no more than three minutes. I hereby commit to tell you what we have learned from your feedback and that of other long-haul customers by November 30th at the latest. Now, please help us to improve.” OK, this example may cross the ‘creepy’ line, but my point is simple: don’t ask things you should already know. An airline asking about a particular flight should already know whether it was on time or not, for example.
  • The survey questions would be:
    • How likely are you to recommend British Airways to a colleague or friend? [On a 0 to 10 scale]
    • Have your flights from Geneva to Denver on October 22nd made you more or less likely to recommend BA?
    • Why?
    • What should we improve?

I can’t think why they would find it useful to ask anything else. If I don’t mention the food and drink on the flight, they cannot have mattered much to me. Note that the email is highly personalized and uses data the airline already has. I think the reason airlines often ask for the data they already have is that confidentiality rules mean they cannot provide it to a third party. Another great reason to run your own surveys.
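For illustration, here is a minimal sketch of assembling such a personalized invitation from data the airline already holds. Every field name, and the survey link, is invented.

```python
TEMPLATE = (
    "Dear {title} {surname}, thank you for using British Airways for your "
    "flight from {origin} to {hub} and onwards to {destination} on {date}. "
    "We would like to ask you just four questions about your trip. "
    "The survey should take no more than three minutes: {survey_link}"
)

def build_invitation(booking):
    """Fill the invitation from the passenger record, so the survey
    never asks for anything the airline already knows."""
    return TEMPLATE.format(**booking)

print(build_invitation({
    "title": "Mr.", "surname": "FitzGerald",
    "origin": "Geneva", "hub": "London", "destination": "Denver",
    "date": "October 22nd",
    "survey_link": "https://example.com/survey?code=ab12cd",
}))
```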
