Why Surveys and Questionnaires?
As UX researchers and designers, there are many reasons we might put surveys and questionnaires in front of our audiences (I will default to "surveys" throughout this post, but in most cases this discussion applies equally to questionnaires, aside from the implications of calling it one or the other). These include, but aren't limited to: gaining a better understanding of how and why audiences might use our end product, gauging their current knowledge of the product or similar products, learning how users feel about specific design elements, uncovering why users made certain decisions while using a product, and identifying what sorts of audiences are using or might use the product. Regardless of the questions asked, the end result is ideally an abundance of both qualitative and quantitative data that can inform next steps in development, dissemination, or approaches to similar projects in the future. Therein lies the purpose: these are yet another set of tools in the UX researcher's and designer's arsenal for better understanding how users interact with what we've created. Like any tools, though, they come with their own considerations, shortcomings, and idiosyncrasies that we have to be aware of to use them effectively.
Creating Surveys
The method of creating surveys is remarkably similar to how one might fashion interview questions or research questions in an academic setting. Depending on what you're seeking to uncover through your survey (qualitative or quantitative data, simple or complex responses, unique or prewritten responses, etc.), your questions will look vastly different. For example, if you're looking to aggregate quantitative data, you might focus on demographic questions with prewritten response options; if you want to surface differences among your participants, you might leave each question open-ended with a place for them to input their own responses; you might even check whether participants are paying attention to a longer survey by including pairs of questions that should contradict each other. In each of these circumstances, your questions will shift based on your intentions as the creator of the survey (the sketch below illustrates one way such an attention check might work). Once you've figured out exactly what you're looking for with your survey, you can move on to drafting questions and choosing a platform/medium.
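As an illustration of that last point, here is a minimal sketch of how a contradictory-pair attention check might be scored after responses come in. The question keys, the 1-5 agreement scale, and the tolerance value are all assumptions made for the example, not a standard.

```python
# Attention check: the same idea asked twice, once reverse-worded.
# If a participant agrees strongly with both versions, they may not
# be reading carefully. Keys and the 1-5 scale are assumptions.
def flags_inattention(answers: dict, tolerance: int = 2) -> bool:
    """Return True when a forward/reversed pair fails to mirror.

    On a 1-5 agreement scale, a reversed item should roughly mirror
    its pair: forward + reversed should be close to 6. A large
    deviation suggests the participant wasn't reading the questions.
    """
    forward = answers["easy_to_use"]        # "The app was easy to use."
    reversed_item = answers["hard_to_use"]  # "The app was hard to use."
    return abs((forward + reversed_item) - 6) > tolerance

print(flags_inattention({"easy_to_use": 5, "hard_to_use": 5}))  # True
print(flags_inattention({"easy_to_use": 5, "hard_to_use": 1}))  # False
```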
Your questions are the backbone of your survey, and as such they need to be carefully considered at each step of the survey development process. As you draft them, consider tying each question to a specific aspect of the design or the testing, so that it garners specific, actionable responses grounded in the users' experiences. For example, you might set up range-based responses that measure how strongly a user agrees or disagrees with a set of statements, ask whether they had a positive or negative experience with a specific design decision the team is concerned about, or provide text boxes where they can offer additional insight into their choices. Questions like these establish quantitative information (how many users agreed with a statement versus disagreed with it) as well as qualitative information ("I agree because…"). However, in setting up these questions, there are a number of considerations you must make to ensure that you're not damaging your data set while adding to it.
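To make that concrete, here is a minimal sketch of how a range-based question with an open-ended follow-up might be represented and tallied. The class, field names, and five-point scale are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass, field

# Illustrative five-point agreement scale; the labels are an
# assumption, not a standard -- adjust to match your platform.
LIKERT_LABELS = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Neither agree nor disagree",
    4: "Agree",
    5: "Strongly agree",
}

@dataclass
class LikertQuestion:
    """A statement rated on an agreement scale, with a free-text
    follow-up for qualitative context ("I agree because...")."""
    statement: str
    responses: list = field(default_factory=list)  # (rating, free_text) pairs

    def record(self, rating: int, free_text: str = "") -> None:
        if rating not in LIKERT_LABELS:
            raise ValueError(f"rating must be 1-5, got {rating}")
        self.responses.append((rating, free_text))

    def agreement_split(self) -> tuple:
        """Quantitative view: how many agreed (4-5) vs. disagreed (1-2)."""
        agreed = sum(1 for r, _ in self.responses if r >= 4)
        disagreed = sum(1 for r, _ in self.responses if r <= 2)
        return agreed, disagreed

q = LikertQuestion("The navigation menu was easy to find.")
q.record(5, "It was right where I expected it.")
q.record(2, "I had to hunt for it on mobile.")
print(q.agreement_split())  # (1, 1)
```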
Regardless of the data you're looking to collect, your goal as the researcher or designer is to ask questions that are free of potential biases, preconceived notions about your product, unclear directions, leading phrasing, and superfluous words. By removing these characteristics from your questions, you can keep the data from your surveys untainted. For this reason, it can be fruitful to test your survey on others working on the same project, and to make multiple revision passes: verify that your questions are free of potential issues, account for participants' prior experience with your product or similar products, prune each question until it is as concise as possible, and make sure every question can only yield responses you can actually act on. The more you reduce these potential issues, the more valuable your data will be in making future decisions.
Pre/Post-test Surveys?
Once you've planned out your questions, you must consider whether your study will benefit from both pre-testing and post-testing, or just one of the two. Both have their benefits and constraints, but the most pressing concern is whether participants will become fatigued and stop answering. For this reason, if you plan on using both, the pre-test survey should focus only on the information you absolutely need before starting your user testing. You might collect users' perceptions of their own ability to use the product you're about to test with them, demographic data that can help inform the testing process, and information about why they might use the product; however, all of this pales in comparison to what you can gain from post-test surveys. With post-test surveys, you have the opportunity, as noted earlier, to focus your questions on specific design elements of the product, which can then be used to develop a deeper understanding of what needs to be changed or left alone (the sketch below shows one way of pairing the two survey rounds by participant). After all of these considerations, you're finally ready to disseminate your survey and collect your data.
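If you do run both, one practical step is joining pre- and post-test responses by participant, so demographic or self-assessment answers can contextualize post-test ratings. Here is a minimal sketch using pandas; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical pre- and post-test exports keyed by participant ID;
# the column names here are illustrative assumptions.
pre = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "self_rated_expertise": [2, 4, 3],  # 1-5, collected before testing
})
post = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "task_satisfaction": [4, 5, 2],     # 1-5, collected after testing
})

# Join on participant ID so each post-test answer can be read in the
# context of what that participant reported before the session.
merged = pre.merge(post, on="participant_id")
print(merged.sort_values("self_rated_expertise"))
```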
Understanding and Communicating Survey Data
After collecting responses, you have to communicate the newfound data to your stakeholders in a meaningful way, often through tables, graphs, and quotes. However, there are instances where the collected data itself deserves additional scrutiny before you can call it valuable. It's tempting to say, almost immediately, that once a survey is complete you have valuable data, but there are ways of checking it against the tests you've completed as well. In particular, pay attention to:

- correlation versus causation in the relationships between data points;
- questions with massively skewed responses and no plausible causal explanation, which may indicate a bad question (one way to flag these programmatically is sketched after this list);
- an overabundance of positive or negative responses with no outlier; if there is an outlier, interview that participant to understand their perspective;
- automated graphs, which can make differences between data points seem larger or smaller than they actually are;
- your math: always double check that any statistics are properly calculated.

If any of these issues show up in your data, it's worth taking the time to survey a second set of users to corroborate or contradict the original survey's responses, though you should limit changes between surveys and testing periods to individual variables where possible, so you can isolate where the issue arose.
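Here is a minimal sketch of two of those checks (response skew and correlation) using pandas. The question names, ratings, and skew cutoff are all illustrative assumptions.

```python
import pandas as pd

# Hypothetical 1-5 ratings per question; all values are illustrative.
responses = pd.DataFrame({
    "q1_nav_easy": [5, 5, 5, 5, 4, 5, 5, 5],
    "q2_checkout_clear": [4, 2, 5, 3, 1, 4, 2, 3],
})

# Flag questions where one answer dominates: heavy skew without an
# obvious causal explanation can signal a badly worded question.
SKEW_CUTOFF = 0.8  # arbitrary threshold, tune to your study
for col in responses:
    top_share = responses[col].value_counts(normalize=True).max()
    if top_share >= SKEW_CUTOFF:
        print(f"{col}: {top_share:.0%} gave the same answer")

# Correlation between questions: this shows association only,
# never causation on its own.
print(responses.corr(method="pearson"))
```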
Once you're done testing, verifying, and corroborating your data, you can start thinking about how best to communicate it. Most importantly, consider how your data will look in different formats. A line graph works well when the data supports a clear positive or negative trajectory (and tolerably well for stagnant data that indicates no change over time); a comparative bar graph works well for showing differences in raw sums; a pie chart works well for displaying percentages; and if you have a lot of qualitative data and instead want to quantify recurring words and phrases, you might build a table from data aggregated in a program like NVivo (a lightweight version of that word counting is sketched below). Needless to say, just as there are a multitude of ways to create your surveys, there are a multitude of ways to take the resulting data and communicate it to your stakeholders. Whichever method you choose, weigh its benefits and constraints, and ensure clarity above all else.
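For that last case, a rough stand-in for a word-frequency table can be built with Python's standard library. The comments and the stopword list here are illustrative assumptions, and a dedicated tool like NVivo would go much further.

```python
import re
from collections import Counter

# Hypothetical open-ended responses; in practice these would come
# from your survey export rather than being inlined like this.
comments = [
    "The checkout flow was confusing on mobile",
    "Checkout took too long and the mobile layout was confusing",
    "Loved the search, but checkout needs work",
]

# A tiny illustrative stopword list; a real analysis (or a tool
# like NVivo) would use a far more complete one.
STOPWORDS = {"the", "was", "on", "and", "a", "but", "too"}

counts = Counter(
    word
    for comment in comments
    for word in re.findall(r"[a-z']+", comment.lower())
    if word not in STOPWORDS
)
for word, count in counts.most_common(5):
    print(f"{word:<12}{count}")
```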