How to Design a Survey that Works for Content Marketing Research

Are you undertaking a do-it-yourself survey and feeling stumped at the questionnaire stage? It’s not surprising. In our experience, survey design is the place where people are most likely to trip up. We’ve seen hundreds of errors over the years in survey design, some small and relatively inconsequential, others so serious they undermine the validity of a study’s findings. If you’re an inexperienced survey builder, this is absolutely the spot to take greatest care.

What can go wrong with survey design?

The following what-can-go-wrong list is not designed to be comprehensive, but more a way of showing just how much can go wrong. If I sat here for another hour, I could likely double or triple the length of this list (and regale you with some do-as-I-say-not-as-I-do stories for your entertainment).

Asking a question that’s unclear or confusing: What may be perfectly clear to you after studying an idea for months may be opaque to your respondent. Too often I encounter unclear or poorly worded survey questions—whether due to messy language, poorly defined terms, or some other obfuscation. Even before you formally test your survey with a small group, ask someone to read it and point out the more obvious areas of confusion. (And don’t feel bad about obvious errors… it happens to all of us when we’re too deep into a project.)

Asking a “leading question”: You’re likely familiar with the concept of leading questions from political polling. A leading question nudges respondents toward a particular answer. For example, “Tell us about any problems you had when interacting with customer service” rather than, “Tell us about your experience interacting with customer service.” Leading questions not only produce poor results, they also introduce doubt in your survey-taker’s mind about your credibility/smarts.

Asking questions with an unusable answer: Let’s say you ask, “Do you struggle with talent and budget shortfalls?” A “yes” answer doesn’t deliver a clear insight. Does the respondent struggle with talent shortfalls, budget shortfalls, or both?

Using poor assumptions: Many data science sins fall in the “poor assumptions” bucket. For example, don’t force people to choose a single answer from a list of choices when more than one answer applies to their situation. Provide an “I don’t know” and/or “not applicable” option if the situation warrants it (hint: it usually does). (Side note: when you test your survey, if too many people choose “I don’t know” or “not applicable,” you may want to revisit your question/answer.)
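If you tally your pilot results in a script rather than by eye, a quick check like the one below can flag questions worth revisiting. This is a minimal sketch in Python with pandas; the question names, answer labels, and 25% cutoff are illustrative assumptions, not output from any particular survey tool.

```python
import pandas as pd

# Hypothetical pilot-test responses; each column is one survey question.
responses = pd.DataFrame({
    "q1_budget_change": ["Increase", "Decrease", "I don't know", "I don't know"],
    "q2_team_size": ["1-5", "6-10", "1-5", "Not applicable"],
})

# Answers that signal a respondent couldn't meaningfully answer.
OPT_OUT = {"I don't know", "Not applicable"}
CUTOFF = 0.25  # arbitrary for this sketch: revisit any question above it

for question in responses.columns:
    opt_out_rate = responses[question].isin(OPT_OUT).mean()
    if opt_out_rate > CUTOFF:
        print(f"Revisit {question}: {opt_out_rate:.0%} of testers opted out")
```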

Asking questions your audience doesn’t have the information to answer: I often encounter surveys in which respondents are asked about compensation for members of their team, information they don’t have access to. The same is true when asking about budgets, future planning, or corporate strategy. This is a spot where an “I don’t know” option is critical if you decide to ask one of these questions.

Asking poorly defined questions: The classic mistake is asking a question like, “What is your marketing budget?” Each person who answers will include or exclude different areas. Is PR part of marketing? Is external ad spending included? Unless you are very specific, it’s an impossible question to apply across a diverse group of respondents, meaning your “insight” is all but useless. (One alternative: asking “directional” questions, such as “Will your budget increase, decrease, or stay the same next year?”)

Asking too many questions: People are short on time, and if they grow frustrated by the length or quality of your survey, they’ll be running out the door. I often hear marketers say, “Well, if we’re going to spend all this time and money on a research project, let’s ask as many questions as we can!” Unfortunately, the opposite approach works better: ask as few questions as possible while still being able to tell a great story from the research.

Going overboard on question types: Just because your survey tool offers a dozen or more question types does not mean you should avail yourself of all of them. Likert scales are popular for capturing a range of sentiment (e.g., “On a scale of 1 to 5 …”). The data produced by Likert scales, however, can be challenging for beginners to clean and analyze. (Are people choosing “3” because they are neutral, or because they don’t understand your question?) And some question types are hard for respondents to understand and use. If you’re new to survey design, keep your question types as simple as possible.
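To make the Likert point concrete, here is a minimal sketch in Python with pandas; the ratings are invented for illustration. It tabulates one 1-to-5 question and measures how large the ambiguous midpoint is, which is a cue to re-test the question’s wording, not a finding in itself.

```python
import pandas as pd

# Hypothetical answers to a single 1-to-5 Likert question.
ratings = pd.Series([1, 2, 3, 3, 3, 4, 4, 5, 3, 2])

# Distribution of responses as proportions, ordered 1 through 5.
distribution = ratings.value_counts(normalize=True).sort_index()
print(distribution)

# A large share at "3" could mean genuine neutrality, or confused
# respondents picking the safe middle; the data alone can't tell you which.
neutral_share = (ratings == 3).mean()
print(f"Share choosing the midpoint: {neutral_share:.0%}")
```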

Not considering the story you’ll tell: Without question this is the biggest error of all: asking questions without a clear sense of how you’ll put the responses to use. (I’ll cover this one in more detail below.)

Making these types of mistakes may seem like a small thing (i.e., “So I asked the wrong question in number 8?! We’ll just leave it off the final report!”). The problem is that a poorly designed instrument often makes your company look bad and frustrates respondents. And it can cause your audience to abandon the survey before they’ve finished.

Consider the case of GitLab, which recently posted a survey for its developer community to complete. Before long, users were complaining via Twitter that the answer sets provided were incomplete or inaccurate, and that the survey instrument wasn’t loading properly. This is not the way you want to kick off your survey project, and it’s not going to convince your internal team that you’ve got things under control.

Avoiding beginner mistakes

Let’s talk about some things you can do to avoid these types of problems. Remember, you’ll get better at this as you go, but these are your “must do the first time” steps to get your research project running smoothly.

Know your “why”: Do you have a clear sense of why you are conducting the research? What story do you hope to tell — and how is it different from what’s currently out there? Is this a line of inquiry your audience really cares about? Struggles with? Will the results be in high demand? And are you sure the results will be credible to your audience, not just a punchline that says “buy my product”? All of these things should be clear and well-researched before you begin.

Research what’s out there: Are you developing a survey about a specific angle or idea in your industry? What research already exists? Was it well-received? What can you learn from what has already been published? Can you refine it? Improve it? Make sure you are well-informed not just about the topic you’ve chosen, but also about the degree of “content competition” your study will face online.

Map out your story … even if it may change: I sometimes notice well-intentioned marketers asking a wide range of questions in their survey, and then deciding on the story they will tell after the results come in. The resulting publication is often a “here is everything we discovered” report that has no through-line or story that sets it apart.

Rather than testing out a wide range of questions, hoping to hit something interesting, narrow your focus. A narrow focus means you can ask more probing questions and possibly uncover truly new ideas and insights. Orbit Media’s research program is a great illustration of this idea. The company offers web design and development, but it chose to break off a very niche topic, blogging effectiveness, to focus on in an annual research study.

Test your questionnaire: It’s essential to test your survey and revise or remove problematic questions before going live. This means asking a small number of people to answer the survey while you look for trouble spots. Sometimes testing will uncover simple errors, such as omitting an “I don’t know” option. Sometimes the problem will be much more serious, such as not providing enough context for respondents to answer a critical question. Your survey tool may offer a testing mode, which allows people to comment on what they don’t like about individual questions. Testing your survey can iron out problems before they muddy your results or render them unusable.

Move your demographic questions to the end: More often than not, the demographic questions are at the beginning of a survey, but they may be less obnoxious at the end. In a blog post about common questionnaire mistakes, Natalie Fisher, a social research and evaluation consultant based in Australia, writes:

“Respondents can find these types of questions intrusive and aren’t always comfortable answering them on a whim at the beginning of a survey, and it is likely they will not complete the whole thing. They are more likely to volunteer this type of information once they are sure the survey is legitimate and have taken the time to fill it out completely.”

Consider hiring an expert: While you may not be able to afford a data scientist to shepherd you through the entire research process, it may be helpful to get an expert review and opinion of your survey before you test it on a small audience. You’ll want to hire someone who has experience designing surveys, and ideally has experience in your industry.

That said, be careful who you hire. As much as you may want or need expertise, you likely don’t need to be dragged through the data science weeds to “perform an inverse probability of censoring weighting to assess possible bias from non-completers.” Ensure would-be data collaborators understand that your study’s accuracy and credibility are of high importance, but no one is going to die if the results are off by 0.001.

Survey design is just one element you need to consider when creating original research. Sign up for our weekly newsletter to get more pointers on what you need to know.