I admit it: I have a lot of research pet peeves because I look at a lot of surveys. But there’s one that gets me every. single. time.
You start taking a survey because you want to be helpful (or add to your swipe file of survey questions that work or don’t work). Then you run across one or more questions you can’t answer.
Maybe it’s a question you don’t have insight into (how would I know what percentage of my organization’s marketing budget is spent on social media?). Or maybe you’re asked about a topic that isn’t relevant to you. Whatever the case, there’s no answer option that works for you.
Adding to the frustration: The question is required. And there’s no way to indicate you don’t have an answer and/or you want to skip this one.
At this point, one of two things happens:
- You make your best guess (which skews the data because your answer is meaningless)
- You exit the survey (which reduces the number of people who complete the survey)
If you want to avoid frustrating your survey takers and increase the number of people who complete your survey, the solution is simple: include an option that gives your survey taker an out.
Common answer options include:
- I don’t know
- This is not part of my job
It’s an easy pet peeve to avoid . . . but there is a bit of a danger with the “unsure” response as well.
You need to decide how you want to deal with the “unsure” answers when analyzing your data.
You have two options.
Your first option is including the “unsure” responses in your data. This works well if it’s insightful to understand the uncertainty people have about a certain topic. For example: You really do want to know how many people aren’t certain if they will look for a new job in the coming year.
Alternatively, you can remove those who selected “unsure” and recalculate the data. For example: Let’s say 1,000 people answered your survey. Of those, 100 people answered “unsure.” If you remove those responses, the percentages for the remaining answer options would now be based on 900 respondents instead of 1,000. And that will change the numbers.
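To make the rebasing concrete, here’s a minimal Python sketch. The counts are made-up numbers chosen to match the 1,000-respondent example above; the only point is that dropping “unsure” shrinks the base, so every remaining percentage shifts upward.

```python
# Hypothetical survey results: counts per answer option (made-up numbers
# that add up to the 1,000 respondents in the example).
responses = {"Yes": 550, "No": 350, "Unsure": 100}

def percentages(counts):
    """Return each option's share of the total responses, as a percentage."""
    total = sum(counts.values())
    return {option: 100 * n / total for option, n in counts.items()}

# Including the "unsure" responses: percentages are based on all 1,000 answers.
with_unsure = percentages(responses)          # "Yes" comes out to 55.0%

# Removing them: the same counts are rebased on the remaining 900 respondents,
# so "Yes" rises to roughly 61.1% even though nothing else changed.
without_unsure = percentages(
    {option: n for option, n in responses.items() if option != "Unsure"}
)
```

The same rebasing applies to every crosstab that touches the question, which is why deciding up front whether to include “unsure” saves so much rework later.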
My default used to be including those who reported they were unsure. But I recently had to recalculate a pile of data on a client project when the client decided they didn’t want the “unsure” responses included for this year – or for any of the previous years we were analyzing. (It was a painful but useful lesson.)
I’ve since added a step in my process to review those questions that offer an “unsure” option with the client. Together we decide if we’ll include or remove those responses in the calculations.
This new process is saving a lot of time in the data analysis phase, as I no longer need to rework data (and rerun countless crosstab reports, re-proof the data, etc.).