Once you’ve created a survey and collected the data, the job doesn’t stop there. That would be like Usain Bolt calling it quits ten metres out from the finish line.
Even the most perfect survey is useless if you don’t crunch the numbers and analyze your results. How else will you get insights into how you can improve your products, services and your company as a whole?
We understand that survey analysis sounds intimidating. Terms like “cross-tabulation” and “ordinal data” sound more like lines from Star Trek than something real humans say.
But you don’t need to be Spock or have a PhD in mathematics to analyze your survey data. It’s not rocket science - it’s just a touch of regular science mixed with your own brainpower.
We’ve created this guide to demystify the process and help anyone - whether you’re a small business owner or a Silicon Valley tech guru - to draw actionable insights from your survey results and make positive changes to your business.
Before you dive into all that precious data, it’s important to understand the basics of what you’ll be dealing with. Survey responses come in two main forms:
Closed-ended questions are questions that can only be answered by choosing from a select number of options. As a general rule, they take the form of “yes” or “no”, multiple-choice or rating scales like the Likert Scale.
For example, if someone asked “can we watch The Mandalorian tonight?”, that would be a classic closed-ended question. You’ve only got limited options: to say yes or no. (Clearly, you would say yes, it’s a great show).
You would use closed-ended questions to collect what’s called quantitative data - data that is measured in numbers. This data makes your data analysis way easier and is great for things like tracking trends and percentages and categorizing your respondents based on the options they have chosen.
Closed-ended questions don’t delve into the details. That’s where open-ended questions come in.
If you’re after deeper insights then open-ended questions are the way to go. These survey questions can’t be answered in a single word—they require respondents to elaborate on their thoughts, opinions and feelings.
Open-ended questions start with “what”, “how” and “why”, encouraging descriptive answers rather than one-word responses. To keep the theme going, an example of an open-ended question would be, “what do you think of Baby Yoda?”
These questions collect qualitative data - data not measured by numbers. That makes them a great way to add context to statistics and help identify problems through feedback. They help tell the story behind the numbers.
Open-ended questions can be difficult to analyze and take more time to interpret. But don’t let that scare you - they lead to rich, unique data you won’t find elsewhere.
In most cases, they’re best used to supplement closed-ended questions, which should make up the majority of your survey questions. (If you’re wondering how to write a good survey, we’ve got a guide to surveys, questionnaires and polls here).
Now you understand what kind of data you’re analyzing, you need to know two things: how to measure your survey questions and what type of statistical analysis you can do with your data.
This is where the four levels of measurement come in. Created in 1946 by a psychologist named Stanley Smith Stevens, they’re the definitive way to capture data from surveys.
The four levels are:
You can think of them kind of like Russian Matryoshka dolls - each has its own features, but also has the attributes of the levels that went before it. Take a look at the infographic below to get a quick snapshot of the differences between the levels.
Nominal scales refer to data that can’t be ranked or ordered. That means they’re used for the kind of data that doesn’t have a quantitative (numerical) value, like different categories.
Think of each value as a label for the information you want to collect. Here’s an example.
All of these are gaming platforms, but they can’t be ranked or ordered in any meaningful way. Each label is individual and doesn’t hold a value you could express with a number.
Sure, you could give Xbox One a value of one and Nintendo Switch a value of six, but if you tried to calculate mean, median or mode, it wouldn’t mean anything.
When you use the nominal scale you’re simply tracking the number of people who selected each option, and finding out which was selected the most.
Nominal data is collected with closed-ended questions, usually through multiple-choice options. It’s one of the most popular (and basic) scales of measurement because it’s easy for survey respondents to answer and yields clear results.
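To make that concrete, here’s a minimal sketch (in Python, with made-up responses) of how you’d tally nominal data: since the labels have no order, all you can do is count each one and find the most common.

```python
from collections import Counter

# Hypothetical responses to "Which gaming platform do you use most?"
responses = [
    "PlayStation", "Xbox One", "Nintendo Switch", "PlayStation",
    "PC", "PlayStation", "Xbox One", "PC", "PlayStation",
]

# Count how many people selected each label
counts = Counter(responses)
most_common_platform, votes = counts.most_common(1)[0]

print(counts)                # frequency of each label
print(most_common_platform)  # the mode: "PlayStation"
```

Frequency and mode are the only statistics that make sense here - averaging labels would be meaningless.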
With ordinal scales, there is an order between values. They let you see the order of different values and understand categories that aren’t explained by numbers - think of concepts like happiness, customer satisfaction and even levels of pain.
Let’s take a look at an example of an ordinal scale question.
Notice how the difference between values doesn’t matter? You could spend all day trying to measure the difference between ‘unsatisfied’ and ‘neutral’, but that’d be a waste of time.
With ordinal scales, it’s the order that’s important, not the difference between the options. It’s like if someone called you ‘pretty’ and someone else called you ‘beautiful’. You can’t measure the exact difference, but you know one is better than the other.
Keep in mind that ordinal scales have no “origin of scale”. That means there’s no way to say where the scale starts or ends, kind of like space or time (which limits their uses for data analysis.)
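Because only the order matters, the median is the safest summary statistic for ordinal data - it respects rank without assuming equal gaps between options. Here’s a hedged sketch with made-up satisfaction responses:

```python
import statistics

# Hypothetical ordinal scale, lowest to highest
SCALE = ["Very unsatisfied", "Unsatisfied", "Neutral", "Satisfied", "Very satisfied"]
rank = {label: i for i, label in enumerate(SCALE)}

answers = ["Satisfied", "Neutral", "Very satisfied", "Satisfied", "Unsatisfied"]

# The median only uses order, so it's meaningful for ordinal data
# (a mean would wrongly assume the gaps between options are equal)
median_rank = statistics.median(rank[a] for a in answers)
print(SCALE[int(median_rank)])  # "Satisfied"
```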
When it comes to interval scale questions the order and difference both have meaning. It’s a combination of the first two scales. “Interval” refers to the interval - or distance - between two variables.
This scale is used to rank things when you know the value of each option and the distance between them. Think of things like temperature in Celsius or dates on a calendar - you know which one is higher than the other and by how much, but there’s no true zero point.
Check out an example of the interval scale below:
The above question is a basic Net Promoter Score (NPS) survey. It measures how likely a respondent is to recommend a company, product or service. NPS surveys and customer satisfaction surveys are classic interval scale questions, because they assign numerical values to customer feelings and opinions.
Interval scales measure the mathematical contrast between options. For example, if you ask someone to select a brand from a list, that’s a nominal scale. But if you ask them to rate a brand from one to ten, that’s an interval scale.
This is the first scale that allows you to do true statistical analysis. You can record median, mode and mean, or delve deeper through cross-tabulation (we’ll get to that later.) Another great way to measure customer attitudes is with Semantic Differential Scale questions.
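Since NPS came up above, here’s a quick sketch of the standard NPS calculation using made-up scores: respondents who answer 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors.

```python
# Hypothetical 0-10 answers to "how likely are you to recommend us?"
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(1 for s in scores if s >= 9)   # scored 9-10
detractors = sum(1 for s in scores if s <= 6)  # scored 0-6

# NPS = % promoters minus % detractors, a whole number from -100 to 100
nps = round(100 * (promoters - detractors) / len(scores))
print(nps)  # 30
```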
Last but not least, ratio scales are the bee's knees when it comes to statistical analysis. It’s like in the Power Rangers when they all join together in a giant robot - in other words, it’s an amalgamation of all the previous scales.
Ratio scale questions have an order, set values and differences. They also have a true zero. A starting point; an origin story. For anyone wondering why that’s significant, it means you can calculate the order and distance between units.
That makes datasets more precise and allows you to do things like multiply, divide and find the square root of values. At first glance, it’s difficult to see how interval and ratio scales differ. Let’s take a look at an example to make things a bit clearer:
The key is that with a ratio scale, both the difference and the ratio between options have meaning; with an interval scale, only the difference does. For example, you can say eight cars in a garage is twice as many as four, or four pizzas are double two pizzas.
The differences may seem small, but ratio scales open up more avenues for customer research. They allow you to run any type of statistical test - from descriptive analysis to standard deviation and calculating mode, median and mean.
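Here’s a small sketch of those descriptive statistics on made-up ratio-scale data. Because the data has a true zero, ratios between values are meaningful too:

```python
import statistics

# Hypothetical ratio-scale data: "How many times did you visit last month?"
visits = [0, 2, 2, 4, 8]

print(statistics.mean(visits))    # 3.2
print(statistics.median(visits))  # 2
print(statistics.mode(visits))    # 2
print(statistics.stdev(visits))   # sample standard deviation

# A true zero makes ratios meaningful:
print(visits[4] / visits[3])      # 8 visits is twice as many as 4
```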
(For more on types of data check out our comprehensive guide to data collection methods.)
Now you have a grasp of question types and measurement scales you’re ready to dive into your data and start extracting insights from your online survey. With that in mind, we’ve broken down successful survey data analysis into six simple steps.
Before you even think about sending out an online survey, you need to ask yourself what it is you’re trying to find out. There has to be one central question you’re trying to answer.
It’s like TV shows. Each episode is about one thing, but overall the show is about something greater. Friends is about six friends who navigate life together; The Walking Dead is about people trying to survive a zombie apocalypse.
What is your survey about? Maybe it’s about your customer experience, or maybe it’s about what the larger population thinks of your brand, or maybe you're trying to gauge interest in your idea for a new yoga business.
Outlining your overarching question helps you find out what your survey is about. Doing that ensures you write relevant research questions and focus on the right data points.
For example, let’s say you held a business conference and wanted to collect customer feedback. With that goal in mind, your overarching question would be “did people enjoy the conference?”
Now you could send out a survey asking that single question (like an NPS survey), or you could send a longer survey built around that central question. It’s up to you, but by setting the right goal you’re guaranteed to collect data that’s useful to your business.
When there's lots of work to be done sometimes it’s easy to be overwhelmed and not know where to begin. We’ve all been frozen in front of a project without a clue what to do.
That’s why you should start with quantitative data. It’s much easier to compare numbers than it is to delve into long-form responses. Leave the qualitative data - the long answers and written feedback - for last.
Let’s say you ran a survey measuring customer experience at your shoe store. One of the questions asks whether customers plan on returning to your store. Respondents are then asked to choose from “yes”, “no”, or “not sure”, with an option to explain their answers.
Now imagine one of your customers, let’s call her Karen, has selected “no”. Alongside this answer, she has written a three-page essay on the service in your store, with references and footnotes, in a tone of what can only be described as ‘seething rage’.
You could start your data analysis with Karen’s manifesto. But where would that get you? It doesn’t tell you how anyone else feels - her opinion could have (and likely has) nothing to do with the rest of your customer base.
In contrast, if you look at the quantitative data first you get an instant visualization of your data. If your sample size is 200 (meaning 200 people took your survey), and 142 said they would return to your store, that means 71% of respondents would return to your store.
It gives a much better idea of your store’s popularity than Karen’s passionate feedback. Yet if you looked at her response first, you would get a completely different idea. That’s why you start with quantitative data - it’s the bread and butter of data collection.
Start with the numbers and let the quantitative data give you a clearer idea of what you’re dealing with. Then you can dig into the qualitative responses and start to unravel why.
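The shoe store example above boils down to one calculation. Here’s a sketch using the same 200-person sample (the “no” and “not sure” split is made up for illustration):

```python
# Hypothetical "will you return to our store?" responses
responses = {"yes": 142, "no": 31, "not sure": 27}

sample_size = sum(responses.values())  # 200 respondents
return_rate = 100 * responses["yes"] / sample_size

print(f"{return_rate:.0f}% of respondents would return")  # 71%
```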
This is where the rubber meets the road. You’ve set your goals, collected your data and made sense of the main findings. Now it’s time to get deeper insights into your statistics.
Cross-tabulation (“crosstab” for the cool kids) is the key to doing just that. It sounds very complicated, but in simple terms, it’s a method for splitting your data into subgroups to see how different demographics respond.
It’s a simple and effective way to add context to your responses and learn more about key data points. Let’s run through just how it works.
Say you decided to run an online survey asking whether people like rap music.
Naturally, you would split the data into: people who like rap music and people who don’t like rap music. Your responses look something like this:
Now to crosstab the data you would add another variable. In this case you want to find out how feelings about rap music change based on age. So you add the survey question:
Once you have these results you can chuck them into an Excel spreadsheet (or take advantage of Paperform's Google Sheets integration workflows). When this is done you’ll be able to study the data and see if you can identify any correlations. It’ll look like this:
You can now see that 87% of people under 18 like rap music, while more than a third of people over 35 don't like it. It’s not hard to imagine the implications this would have if you were doing market research for a rap album.
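If you’d rather skip the spreadsheet, a crosstab is easy to build in code too. Here’s a minimal sketch with made-up response pairs (the counts here are illustrative, not the figures above): count every combination of age group and answer, then convert each row to percentages.

```python
from collections import defaultdict

# Hypothetical paired answers: (age group, likes rap music?)
answers = [
    ("Under 18", "Yes"), ("Under 18", "Yes"), ("Under 18", "No"),
    ("18-35", "Yes"), ("18-35", "No"),
    ("Over 35", "No"), ("Over 35", "No"), ("Over 35", "Yes"),
]

# Cross-tabulate: count each (age group, answer) combination
crosstab = defaultdict(lambda: defaultdict(int))
for age, likes in answers:
    crosstab[age][likes] += 1

# Show each subgroup's answers as percentages of that subgroup
for age, row in crosstab.items():
    total = sum(row.values())
    print(age, {k: f"{100 * v / total:.0f}%" for k, v in row.items()})
```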
It’s even simpler to imagine how this kind of information could help your business. Maybe you want to see the age of users who visit your website? Or the gender of customers who tend to buy your products?
By using cross-tabulation to split your data into subgroups, you can get meaningful insights without being a maths whiz. A similar way you can break down your data is by filtering.
Where crosstab allows you to split and compare different subgroups, filtering is for when you want to narrow your focus onto one specific subgroup and forget about the rest.
For example, you could concentrate purely on how women felt about rap music. Then you could run a cross-tabulation to compare women of different ages to see how their opinion differs. Whatever strategy you choose, it's one of the best analysis tools to get to know your customers.
Keep in mind that every time you run a crosstab or filter, you shrink your sample size. If the sample size is too small it might not have ‘statistical significance’, which means you can’t be sure whether the results are accurate or you just got lucky from the sample you used.
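Here’s a sketch of both ideas at once, with hypothetical respondent records: filtering narrows the data to one subgroup, and you can see the sample size shrink as you do it.

```python
# Hypothetical respondent records
respondents = [
    {"gender": "F", "age": 17, "likes_rap": True},
    {"gender": "F", "age": 42, "likes_rap": False},
    {"gender": "M", "age": 25, "likes_rap": True},
    {"gender": "F", "age": 29, "likes_rap": True},
]

# Filtering narrows the focus to one subgroup...
women = [r for r in respondents if r["gender"] == "F"]

# ...but every filter shrinks the sample size, so watch
# that your subgroup stays large enough to be meaningful
print(len(respondents), "->", len(women))  # 4 -> 3
```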
Author Dan Simmons put it best when he said that “context is to data, what water is to a dolphin.” In other words - all the percentages and numbers in the world don’t mean a thing without knowing the story surrounding them.
Context helps you find that story. The best way to get context is through benchmarking, which is a method that lets you make sense of your data and figure out what the numbers are really telling you.
To put it simply, benchmarking is the practice of comparing current results with the results from the previous survey. So if you ran a charity event last year and collected a survey, that data would be your benchmark for this year’s survey.
You would then take the key findings from that initial survey and use them as the starting point for your next one. Voilà! Without any extra number crunching you can see upfront whether you’ve improved or regressed based on previous results.
The way you use benchmarks depends on the kind of survey you’re running - sometimes it makes sense to run monthly comparisons, while other times you’ll only compare data on a yearly basis.
For example, at Paperform we meet each week to analyze marketing metrics. We compare things like how many visitors our website has had, how many people have read blog posts like this, and how many new customers have signed up.
By doing this we can track what we have done well, where we need to improve, and catch trends as they happen. Maybe one week we have extra visitors thanks to a great blog post, or we get more customers because we spent more on advertisements.
Then we can see how we have performed compared to last week, last month or even last year. The goal is to see what trends are emerging and how responses may have changed over time (the fancy term for this is longitudinal analysis.)
Don’t worry if it’s your first time collecting data. We all start somewhere. Just use your first survey as your baseline. You can use it as your benchmark moving forward and it doubles as a comparison for any filtering or crosstab you do.
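The comparison itself is simple arithmetic. Here’s a sketch with made-up metrics: take each number from the current survey, subtract the benchmark, and express the change as a percentage of the baseline.

```python
# Hypothetical benchmark: last year's survey vs this year's
baseline = {"satisfaction": 7.1, "would_return": 64}  # last year
current = {"satisfaction": 7.8, "would_return": 71}   # this year

# Absolute and percentage change against the benchmark
for metric in baseline:
    change = current[metric] - baseline[metric]
    pct = 100 * change / baseline[metric]
    print(f"{metric}: {change:+.1f} ({pct:+.1f}% vs benchmark)")
```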
You’ll find all sorts of interesting information just by sorting survey responses with filters or crosstab, and comparing it to the overall results you’ve collected. Remember that the more specific your survey data is, the more actionable insights you’ll be able to draw from it.
A time will come when you’ve gotten all you can from quantitative data. Once you’re done with the numbers you’ll want to learn why respondents are answering in a certain way.
Sometimes that ‘why’ will be answered within the survey. Other times it’ll be up to you to get your magnifying glass and deerstalker hat and tuck into the precious qualitative data.
Sorting through written responses and comments helps you find the story behind the numbers. You’re not doing this to stroke your own ego (or punish yourself), your goal is to draw conclusions by combining the data with what people are saying.
To do this it’s important to know the difference between correlation and causation.
Using a heater and wearing tracksuit pants are two variables that are correlated. They go up and down together. One doesn’t cause the other. They’re both caused by cold weather.
But there is causation between cold weather and tracksuit pants. As the temperature drops in the colder months, sales of tracksuit pants skyrocket. You can safely say that one factor (cold weather) affects another (the sale of tracksuit pants).
When diving into your survey results it’s common to mistake correlation for causation. Just remember that just because two variables move at the same time doesn’t mean they’re affecting each other.
Let’s say that your charity event was a resounding success. Your survey respondents loved just about everything - except the location. You dive into the qualitative data and try to find out why this was the case, and discover that a common complaint was the event wasn’t near any public transport.
You’ve found causation. People were disappointed with the location (one thing) because it was not close to public transport (another thing). There’s your story. Now you can use that information to pick a better location next year.
Ultimately it’s about finding the story your data tells and then using it to improve whatever it is you’re trying to improve. It could be anything from rebranding your business to evaluating your marketing campaign - the ways survey data can benefit your business are only as limited as your imagination.
Take your time to read through these long-form responses. You’ll have all sorts of weird complaints and feedback for things you have no control over. Let that stuff go through to the keeper, and only concentrate on things you can control.
It comes down to listening to what your respondents are telling you. If you don’t, you’ll just be wasting time, effort and money.
All the data in the world doesn’t mean much if you don’t show it to anyone. In order for the survey data you’ve collected to have a real impact you have to tell its story.
Whether you’re presenting to stakeholders or giving your sole business partner a rundown of all you’ve learned, there are a few things you want to achieve:
The last point is crucial. There’s no point doing customer research and not sharing it with your product team. Make sure it gets to the right place so changes can be made.
Keep in mind that no one wants to hear the long-winded, step-by-step account of your survey analysis process. No matter who you’re talking to, don’t feel the need to include every single number and calculation you made.
Of course, the way you'll present your results depends on who you’re presenting to. With that in mind, we’ve put together the best ways to present your survey analysis results so people will actually listen (and learn).
Graphs and charts look good and are easy to understand. You can make them pop with colours and patterns, and most people will find them simpler to understand than pages of plain text.
The kind you choose will depend on what type of survey you’ve created and the data you want to display. Venn diagrams work well to show relationships between certain things, but bar charts are better to track changes over time.
No matter what type you choose - and there are literally hundreds of variations - your goal is to convey the information as clearly as possible.
You’re not stuck with one kind of graph or chart either. With Google Sheets or Excel, it’s simple to change graph types, so you can play around and see which is the best option to display your information.
Tables might be a better way to show your numerical data, especially for things like cross-tabulation. Rather than making a graph, or creating a fancy visual presentation, you can just show the data straight from your spreadsheet.
Just make sure you clean up the sheet and get rid of any data that’s not relevant to what you’re trying to share. If you’re showing your crosstab results you don’t need statistical significance, mean, or median. Keep the focus on what’s necessary.
Depending on who you’re showing your data to, people can freeze up when they see spreadsheets. If you find that’s the case, try using a tool like Canva to create a table that looks a tad more aesthetically pleasing.
Why not use a combination of visuals and text? Especially if you’re presenting your survey results to a group (like in a webinar), it’s a good idea to use a presentation that takes your audience through the data in an engaging way.
It’s a great way to tell the story of your data. You can start from the beginning and work through the survey questions, findings and any recommendations you may have.
Infographics are kind of like a visual presentation on steroids. They help you get the point across quickly and simply with a combination of icons, numbers and text.
If you want to break down complex ideas into simple messages, infographics are the way to go. They’re more effective and approachable than pages of text and are a breeze to share with your employees so they can get an idea of the results.
You can also use infographics as a supplement to another presentation form. For example, you could run a presentation and then send one out to the team as a quick reference guide.
There’s an episode of the iconic sitcom Friends, where Rachel gives Ross an eighteen-page letter about their relationship. It’s the most important letter he has ever received because he loves her, and their relationship hinges upon him reading it.
He doesn’t. Why? Because for the most part, pages of written text are boring. For that reason, you should avoid presenting your survey analysis data in long, written reports as much as possible.
But do you know who loves reports? Stakeholders. So when you’re presenting data to senior executive staff or fancy business clients, you'll want to prepare a written report of your findings.
Like infographics, you can also use them to accompany a visual presentation. You can hand out a written report as a reference during a presentation, or give it to employees to summarize key data.
Just make sure your writing is concise and to the point, so people will actually read it.
Now you know how to analyze your survey results all that’s left is to actually do it. It’s easier said than done, but if you follow this guide you’ll be extracting actionable insights in no time.
Make it easy by using an online form builder like Paperform. You can start from scratch or use one of our more than 600 templates, each pre-filled with sample questions so you can get an idea of where to start.
Customize it with your brand colours, fonts and logos and then share it with your audience. Then with Paperform Analytics you can analyze your response rate and other metrics, or make use of our many integrations to export your data to 3rd party apps and take things to the next level.
Don’t just take our word for it. See for yourself with our 14-day free trial, no credit card required.