A Beginner’s Guide to Unmoderated Testing
September 21, 2022
If you work in the world of product development, you are no doubt familiar with go-to user testing methods, like User Interviews. Yet not all methods are as widely used, even though diversifying your research methods can enrich your data sets.
That’s why in this post, we break down the basics of Unmoderated Testing, a unique type of user test that allows you to hear from users without influencing their feedback with your presence. When used in combination with other methods, Unmoderated Testing is a valuable tool to triangulate your data, providing a more well-rounded view of your product.
What is Unmoderated Testing?
Unlike User Interviews, where a researcher is present to lead the session, Unmoderated Testing is exactly what it sounds like: unmoderated. No researcher is present to interview, supervise, or observe the participant, leaving them more or less to their own devices.
Other user testing methods (like Diary Studies, for example) are equally “unmoderated” in this sense, but Unmoderated Testing specifically refers to the use of software designed to guide participants through the feedback session.
Finally, Unmoderated Testing is also asynchronous and remote-friendly. The researcher can spend as much time as they need to thoughtfully design the participant experience, then launch the study and sit back as responses roll in. Participants run the session on their own time, in their own home, and without live observation, which may have the added benefit of making them feel more comfortable revealing moments of confusion or voicing critical reactions: data crucial to your product’s future.
Why Conduct Unmoderated Testing?
Beyond the benefits already outlined (remote-friendly, higher likelihood of candid and thorough feedback), Unmoderated Testing is a great way to capture behavioural data: not what users say, but what they do.
By setting up a series of tasks, researchers can see how participants interact with your product bit by bit, layer by layer. From first impressions to comprehension issues to usability concerns, you can collect data on virtually every facet of your product as well as your product as a whole.
The most traditional application of this method is for Usability Testing. Researchers can observe how participants naturally engage with a product, where they get stuck and what they misunderstand. It’s ideal for evaluating the usability of UX flows by observing if participants are able to complete assigned tasks and how long it takes them.
A more unconventional application of Unmoderated Testing is for early-stage product testing; it can be used to evaluate concepts and assess the desirability of high-fidelity prototypes. By presenting a concept to participants in Unmoderated Testing, you can gather first impressions, assess the idea’s desirability, evaluate its value proposition, observe which features or content draws attention, and so on. Although you won’t be able to dig into a participant’s individual reactions (I recommend pairing this method with Concept Evaluation Interviews to gather deeper data), there is a lot to learn from observing how participants explore a new idea without supervision.
In addition, Unmoderated Testing can be used as part of a competitive review. If you’re curious about a competitor’s product, consider an Unmoderated Test to gather feedback on the product’s pain points and gain creators, and to identify any feature or content gaps.
Keep in mind that when you put a highly polished product in front of participants, their instincts may lead them to point out small usability issues rather than offer high-level feedback, especially if they are experienced panel recruits who are used to participating in Usability Tests. If you’re looking to assess desirability, you may need to provide significant guidance to encourage participants to explore their thoughts at the concept level.
How to Run an Unmoderated Test
Although it comes with many benefits, Unmoderated Testing can be challenging to implement. It all comes down to a well-designed study—and, of course, practice. Trying out an unmoderated study is the best way to learn how to improve your test design for the next time around.
To begin designing your Unmoderated Test, follow the steps below. Hopefully this guide will make you more confident to give it a try.
1. Define Research Objectives
Like any research, it’s important to start the process by defining your research objectives. What is the focus, purpose and significance of your testing? What are your driving questions and what do you want to accomplish? Being clear about your needs is especially important for Unmoderated Testing because it will not only inform your test design, but also key tactical decisions like tool selection.
2. Select the Appropriate Tool
Selecting the right tool is a unique challenge of Unmoderated Testing, especially if you’re working without the constraints of a predetermined tool. There are many options to choose from, each with its own set of features, ways of accessing participants, and pricing model. Pricing has been a significant factor in my own tool selections, but for the sake of keeping this article (somewhat) brief, I’ll focus on the other considerations.
To begin the selection process, you’ll want to think through the types of data you need to capture, as well as how you might want to analyze and share this information. Do you need qualitative insights or quantitative measurements of interactions? Do you need in-app analysis features to speed up your synthesis? Do your stakeholders want to join in on synthesis, see clips, and/or have access to the entire data set? Answering these questions will help you know what to look for.
Most importantly, you’ll want to understand who you need to test with and if your selected platform can support recruiting those participants. There are two ways of bringing participants into an Unmoderated Test: you can recruit them using the platform’s ready-to-go participants, or you can self-recruit and onboard participants onto the tool yourself.
When recruiting participants via the platform’s existing pool, it’s important to understand if the target audience you’re looking for will be available to sample from. Some tools, for example, only have participants from the US, and if your target audience is broader than that (or in some cases narrower), they might not be the solution for you. I recommend speaking directly with someone from the organization to inquire if their participants will meet your needs.
Recruiting your own participants comes with its own pros and cons, too. Aside from the additional time it takes, one challenge is the technical and comprehension risk of onboarding participants onto the platform yourself. At the same time, manually introducing each participant onto the tool, if done carefully, might increase your chances of gathering quality responses.
When considering a new tool, I highly recommend requesting a sales demo (with your questions prepared) and running a pilot to see if everything functions as expected.
3. Draft Test Session (with Tasks & Questions)
The entirety of the testing session will be controlled by the instructions, tasks and questions that you write for participants.
When it comes time to draft your test, first set your scope: participants naturally start to lose interest after 15 minutes, so you’ll want to keep your session well under 30 minutes in length. This is considerably shorter than most interviews, so be very intentional with participants’ time.
Start by introducing the objective of the session and communicate the expectations you have for the user (e.g., to speak aloud). Most participants are eager to help so this is your opportunity to help them help you. Because a researcher won’t be there to course correct, it is important to set the participants up for success from the get-go.
When writing tasks, you’ll need to articulate what the user is supposed to try to accomplish while at the same time not hinting at the correct solution. For example, you could ask something like “Where would you go to find support?” but avoid spelling it out like “Find the contact button.” This is a bit of an art: you need to find the balance between directing the participant and leading them to the answer. Here are some quick tips for writing tasks:
- Define tasks in terms of the user’s “goals” or “objectives”. What is the user need that your UX flow is attempting to solve for?
- Storytelling can help anchor the participant more deeply into the experience. Consider adding a narrative for the participant to roleplay. This can provide context and help them understand why they might be trying to complete this task in the real world. Keep the narrative as simple as possible and focus on what they are trying to accomplish. Any unnecessary details can bias or mislead the participant.
- Tasks should rarely include exact language from the features or content; matching wording is too strong a hint.
- Be clear and concise. Less is more.
Lastly, any instructions meant to direct the user around your product (for example, to bring them to the right starting point for a task) must be written very carefully, because participants cannot ask for clarification if they get lost or don’t understand.
You’ll be able to embed questions throughout the experience, as well as a set of reflection questions at the end. Quick survey questions and spoken responses are more effective than typed responses because participants will already be in the flow of speaking aloud. If you want participants to explain their answers to survey questions, remind them to do so.
4. Pilot the Test
Conducting a pilot run of your test is critical because there won’t be a way to fix problems while the study is actually running. Piloting is also important to make sure you’re asking the right number of questions in the desired time frame.
I would recommend three types of preliminary tests before launching the full study:
- Test your study with a member of the team who understands the research objectives to make sure your study is optimally designed to collect the intended data.
- Test your study with an individual who is not familiar with the product or research objectives but can give you feedback on the usability, flow, and comprehension of the study design.
- Run a pilot study with at least one real participant before publishing for all responses. This will give you an idea of what it will be like to see the real data come in.
5. Recruit Participants
Once your test is ready for launch, it’s time to recruit your participants. As discussed above, you’ll want to consider your target audience and determine what user criteria (if any) are critical to participation. If you’re recruiting from a tool’s participant pool, the platform may limit what you can screen for; learn how its screening process works before writing an in-depth screener. If you’re testing primarily for usability (as opposed to desirability), keep in mind that a specific target audience can sometimes be less important.
As for the number of participants, an Unmoderated Test can be successful with as few as 5 individuals when the goal is to identify usability issues. For broader desirability questions, or to better understand the product’s value propositions, 10 to 15 participants may be sufficient, unless you are testing with multiple user segments or product concepts. Where possible, recruit a few extra to account for unsuccessful entries, which are more common with this type of testing. At the same time, you don’t want to collect much more data than you need, because it can make analysis challenging.
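The often-cited figure of 5 participants comes from the Nielsen–Landauer problem-discovery model, which estimates the share of usability problems found by n participants as 1 − (1 − L)^n, where L is the probability that any one participant uncovers a given problem (Nielsen’s classic estimate is roughly 31%, though it varies by study). A quick sketch, assuming that default rate:

```python
def problems_found(n: int, discovery_rate: float = 0.31) -> float:
    """Estimated share of usability problems uncovered by n participants,
    per the Nielsen-Landauer model: 1 - (1 - L)^n.
    The default discovery rate L = 0.31 is Nielsen's classic estimate;
    real studies can see substantially different rates."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 5, 10, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.1%} of problems")
```

With L = 0.31, five participants surface roughly 84% of problems, which is why small samples work for usability issues while desirability questions, with more varied answers, call for larger groups.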
6. Run Study & Analyze Results
With your study ready and screening criteria set, running your study is as easy as clicking “Go!” The study will go live and participants will respond in their own time.
Unlike moderated tests, where data is collected and processed in real time, unmoderated tests rely on the data being consumed entirely after the fact. This post-study analysis can feel more time-consuming and take a greater cognitive toll on the researcher.
In each recording, you’ll want to listen for the problems each participant experienced, the questions they asked aloud, and their positive and negative reactions. If you collected quantitative data (success rate, task time, ratings, etc.), you’ll also need to extract it for further analysis.
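As a minimal sketch of that quantitative step, here is one way to tabulate per-task success rates and completion times from exported session records. The record fields (`task`, `success`, `seconds`) are hypothetical; every tool exports a different schema:

```python
from statistics import median

# Hypothetical export: one record per participant-task attempt.
# Field names are illustrative, not any specific tool's schema.
sessions = [
    {"task": "find_support", "success": True,  "seconds": 42},
    {"task": "find_support", "success": False, "seconds": 95},
    {"task": "find_support", "success": True,  "seconds": 37},
    {"task": "checkout",     "success": True,  "seconds": 120},
    {"task": "checkout",     "success": True,  "seconds": 80},
]

def summarize(records):
    """Per-task success rate and median time over successful attempts."""
    summary = {}
    for task in {r["task"] for r in records}:
        attempts = [r for r in records if r["task"] == task]
        successes = [r for r in attempts if r["success"]]
        summary[task] = {
            "success_rate": len(successes) / len(attempts),
            # Median is computed over successful attempts only,
            # since failed attempts often end in abandonment.
            "median_seconds": median(r["seconds"] for r in successes),
        }
    return summary

print(summarize(sessions))
```

Even a small table like this makes it easy to spot which tasks participants struggled with before you dig into individual recordings.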
Congratulations! If you made it this far, I’m confident that you have what you need to get started with Unmoderated Testing. Don’t be discouraged if your first attempt leaves you feeling like there is room for improvement. There always is! I hope that by adding a new method to your research toolkit, you can enrich your learnings and help your team build better products.