Remote usability testing can help you gather customer insights when the timeframe is tight and test participants are geographically dispersed. Running tests unmoderated, without a test facilitator to guide the participant through the session, can work well for short, carefully planned tests. While this type of usability testing has some significant drawbacks, unmoderated testing lets you conduct test sessions with hundreds of people simultaneously, each in their own environment.
In this article, I explore some of the tools and methodology for running unmoderated usability tests remotely. For a summary of the gains and trade-offs of different usability testing methodologies, check out this earlier post on our blog.
Many tools already available, with more to come
For in-person user research, pen and paper may be sufficient tools for capturing observations, as the test facilitator and observer(s) are there in person to monitor the test. For remote tests, more technology is required to capture an adequate portion of the participants’ actions. Moderated remote tests can rely on online-meeting software such as GoToMeeting or Adobe Connect to facilitate the session, but unmoderated user testing needs an application that guides the participants through the session and records what happens. Design researcher Nate Bolt has created a rather comprehensive list of the many tools specifically created for remote testing, with details on pricing, setup, and features. At Usability Matters, we use a combination of tools for both moderated and unmoderated testing. A good automated usability testing tool offers recruiting flexibility, the ability to write your own tasks, and a high-quality video and audio recording of the session.
When choosing a remote testing tool, consider the following:
- Does the tool record the participant’s screen and audio? Both are must-haves. You may also want the ability to capture the participant’s facial expressions from a webcam.
- What are the options for recruiting? Will it be a participant panel, or can you select your own pool of users?
- Does the tool let you freely create your own set of tasks and questions for the test?
- Besides live Web content, can you test other elements such as mobile apps, wireframes, and prototypes?
- How quickly can you access the results? Many remote testing tools let you view the recordings almost instantly after the test.
- Pricing plans vary. When considering the total cost of running the tests, factor in the time and resources it will take to analyze the test results.
Define specific objectives for the test
Before you run the test, it’s important to fully understand why the research is being conducted. This will help with recruitment, designing the questions and tasks for the study, and provide a reference point for analysis and subsequent discussions later.
Make sure the objectives are specific. Simply stating “we want to see if this concept/product is easy to use” is too general. To formulate the test objectives, think about any aspects of the test subject that are of particular concern, tasks that might be difficult, and groups or types of users you are worried about, as well as the concerns of stakeholders, including business owners, designers, and developers.
Write the test plan carefully
Because you won’t be present to help explain the tasks to the participants, you need to make sure the scenarios are clear and specific. Once you have a draft of your test plan written, the best way to check for ambiguous tasks is to pilot test the instructions with a group of people internally, and ask them to repeat back their understanding of the task. Then make edits accordingly, and pilot test the plan again if necessary before you start running the actual tests.
Make sure you select tasks that thoroughly assess the specific areas of the site you want to evaluate. Consider breaking larger tasks into smaller ones so that participants can focus on each step of the process and provide specific feedback about each part of the larger experience.
Avoid adding extraneous information to a task, which may confuse the test participants. Also avoid giving clues or telling participants what to do. Consider building some redundancy into your questions, since you won’t be able to ask follow-up questions during the session as you would in a moderated test. You can do this by asking the participants probing questions after they perform specific interactions with the user interface. This way you get both a recording of what they do on the screen and verbal answers to specific questions right after.
Recruit the right participants
Many of the tools let you choose participants from their pool of candidates, while some will let you add participants from your own list. The upside of using a provided participant panel is that it’s fast and easy. The people are mostly familiar with the study tool and can participate in your test as soon as they have time. Make sure the tool lets you specify basic demographics as well as additional qualifiers such as “must have experience with website/product X” or “must use online banking once a month.”
The downside of using panel participants is that they may do these studies so frequently they’ve turned into professional testers who focus on looking for things to critique. The popular tool Usertesting.com, for example, has attracted a large pool of people looking to make some extra cash. Consider recruiting a few extra participants to compensate for any “career participants” and take note of testers who didn’t seem honestly engaged with the tasks.
Make sure you run a proper analysis in the end
The essence of usability testing is to gather behavioural insights by observing people interact with the product. That’s why it’s so important to choose a tool that records the screen, audio, taps, and, if possible, the participant’s facial expressions. Remote, unmoderated studies can save you some time when running the tests, but in the end you will still need to watch the recordings, spend time analyzing the results, and write a report to present the findings. If you choose to recruit a large number of users because the test is easier to run at scale, you may need more resources to work through the findings.
Here are some more things to consider:
What unmoderated remote testing does, and does not, work for
Assessing targeted areas and well-known issues, A/B-type testing, simple competitive analyses, and initial problem discovery all make good candidates for unmoderated testing. Keep the tests short. Participants may believe they’ve successfully completed a task when they haven’t, and there is no test moderator to help them. For this reason, develop straightforward tasks that have well-defined end states. Unmoderated testing doesn’t let you conduct interview-based tasks. If possible, follow up with your participants after the study to discuss their feedback.
No one testing tool may provide a be-all, end-all solution for your user research needs
While some of the more robust online tools that started as unmoderated testing platforms now offer moderated testing and other features, you will likely find yourself using a combination of tools, providers, and solutions to meet your research needs.
Unmoderated testing may not be a time-saver in the end
It will likely not provide all the answers you are looking for, but it serves as one useful method in the toolkit.