3 Questions to Ask–and 3 Ways to Improve Your Surveys Today!

By Martha Brooke, Chief Analyst: Interaction Metrics

At the dealer where I used to have my car serviced, every oil change or repair was followed by a text saying:

You’ll be receiving a survey. It would mean a lot to me if you complete it and give me a top score.

This tactic of asking for a particular score is annoying, prevalent, and how NOT to do a survey.

Pushing for particular answers is known as gaming, and it adds up to bogus data. More on this in a minute.

Bad Customer Surveys are Everywhere

Last year, I wrote about how Whole Foods asks customers to take their survey in an unproductive way. I also examined an Alaska Airlines survey that, with 94 questions, put the fatigue in survey fatigue. I’ve dissected surveys from Ace Hardware to Kaiser Permanente and found bad customer surveys everywhere. In fact, they appear to be a multi-billion-dollar industry.

Let’s Imagine a Better World with Better Surveys

Let’s imagine a better world in which companies ask relevant questions in meaningful ways.

Companies would only use surveys to interact authentically with their customers. They would put a kibosh on leading questions and never bug customers with rote inquiries. They would treat surveys as the pursuit of objective, actionable truths.

You Can Have a Good Customer Survey

Years ago, my company came up with a 20-point checklist for the surveys we develop for clients. We continue to use this list every single day. Feel free to reach out if you want the complete list, but here are my three favorite checks written as questions with improvement ideas.

Question #1: How representative is your data?

Misrepresentative data happens when responses come only from certain kinds of customers and don't reflect your customer base at large. For example, your data might come from customers with lots of free time, customers with a specific gripe, or simply those who provided their email addresses.

With misrepresentative data, your survey data omits one or more customer groups—or your survey sample is too small to give you reliable facts.
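How small is too small? As a rough guide, the respondent count you need for a given margin of error can be estimated with the standard sample-size formula. This is a minimal sketch, assuming a 95% confidence level and the most conservative response split (p = 0.5); the function name and defaults are illustrative, not from any particular library:

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate respondents needed for a given margin of error.

    Uses n = z^2 * p(1-p) / e^2 with a finite-population correction.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative (largest) assumption about response variance.
    """
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Finite-population correction: small customer bases need fewer responses.
    n_adjusted = n / (1 + (n - 1) / population)
    return math.ceil(n_adjusted)

print(sample_size(10_000))  # ~370 respondents for +/-5% at 95% confidence
```

Note the practical upshot: past a few thousand customers, the required sample barely grows, so even very large customer bases need only a few hundred representative responses for a 5% margin of error.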

IMPROVE: Examine who your incoming data comes from. Perhaps you sell to five verticals, but only one vertical consistently takes your survey.

In this case, the fix is to lightly incentivize customers in the other verticals. Or you can use proactive methods, like customer interviews, to reach the pockets of customers who aren't responding.
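One way to spot the problem is to compare each vertical's share of survey responses to its share of your customer base. All the numbers below are hypothetical, as is the rule of thumb that flags a vertical when its response share falls below half its base share:

```python
# Hypothetical counts: customers per vertical vs. survey responses received.
customers = {"Retail": 4000, "Healthcare": 2500, "Finance": 1500,
             "Tech": 1200, "Education": 800}
responses = {"Retail": 310, "Healthcare": 40, "Finance": 35,
             "Tech": 28, "Education": 12}

total_customers = sum(customers.values())
total_responses = sum(responses.values())

under_represented = []
for vertical, count in customers.items():
    base_share = count / total_customers
    resp_share = responses[vertical] / total_responses
    # Rule of thumb (an assumption -- tune to taste): flag any vertical
    # whose response share is under half its share of the customer base.
    if resp_share < 0.5 * base_share:
        under_represented.append(vertical)
    print(f"{vertical:10s} base {base_share:5.1%}  responses {resp_share:5.1%}")

print("Under-represented:", under_represented)
```

A report like this makes it obvious which verticals need an incentive or a proactive outreach effort.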

Question #2: Is there gaming?

Gaming is when associates only survey customers they believe had positive experiences.

In addition, they might ask customers to answer the survey in exchange for implied or explicit favors. For instance, perhaps you were haggling over the cost of a big-ticket item, and an associate said they could come down in price for a good review. Garbage in. Garbage out.

IMPROVE: Take associates out of the equation. They should never have anything to do with who gets your survey. And they certainly shouldn’t be using it as a negotiation tool!
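If you suspect gaming has already crept into your historical data, one simple check (a hypothetical sketch, not a standard metric) is to compare each associate's average score to the company-wide average and review anyone whose scores sit well above it:

```python
from statistics import mean, stdev

# Hypothetical survey scores (1-10), grouped by the handling associate.
scores_by_associate = {
    "A. Jones": [10, 10, 9, 10, 10, 10, 9, 10],
    "B. Smith": [7, 8, 6, 9, 7, 8, 7, 6],
    "C. Patel": [8, 7, 9, 6, 8, 7, 8, 9],
}

all_scores = [s for scores in scores_by_associate.values() for s in scores]
overall_mean = mean(all_scores)
overall_sd = stdev(all_scores)

review = []
for associate, scores in scores_by_associate.items():
    # A mean sitting well above the company-wide average can signal
    # selective surveying -- it proves nothing alone, but merits a look.
    z = (mean(scores) - overall_mean) / overall_sd
    if z > 1.0:
        review.append(associate)

print("Review sampling for:", review)
```

An unusually high average is only a red flag, not proof; some associates really are better. But paired with who sent out the survey invitations, it tells you where to look.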

Question #3: Do you ask leading questions?

Leading questions prompt customers for the answers you want to hear.

For example, “How satisfied were you?” assumes the customer was somewhat satisfied. “How likely are you to recommend us?” assumes the customer is somewhat likely to recommend.

IMPROVE: The best way to rid your survey of leading questions is to get a fresh set of eyes on your customer survey. If you can’t bring in another company, have another department in your company take your survey.

Have your testers mark any places where they felt directed toward a particular answer.

Incidentally, there is one kind of leading question that’s beneficial, and that’s one that indicates you think you can improve. In most cases, that’s a realistic perspective to have.

Bottom line: there are a lot of bad customer surveys in the world, but there is a better way. Start by ensuring your data is representative, taking your survey out of the hands of associates, and eliminating leading questions.

I love talking about survey design and the customer experience, so drop me a line. I'll let you know how many respondents you need for a statistically valid survey, or answer just about anything else about the customer experience that's on your mind!

Toward a better world with better surveys!

Martha Brooke founded Interaction Metrics, a customer listening agency, in 2004. She writes about how to raise the bar on customer feedback programs here.
