Ask Us Anything: UX research

Terry Costantino and Steven LeMay of Usability Matters answered your UX research questions in a 30-minute live session.

Full transcript

Anita Sedgwick: Okay, excellent. All right, welcome everybody. Sorry we had a few little technical glitches getting started today. Welcome to Ask Us Anything on UX research, welcome to this live chat with Terry Costantino and Steven LeMay.

Steven LeMay: Hello.

Terry Costantino: Hi.

Anita: My name’s Anita Sedgwick, I run marketing here at Usability Matters, and I will be hosting our Ask Us Anything session. We’re thrilled to host this first in a series of Ask Us Anything live webinar chats, which are designed to address the many questions that we get from the design community. Today’s session will be specific to UX research. Just before we get into the many questions, I wanna cover off a few administrative things. First off, this is being recorded, so those of you out there who wanna come back and reference any of the questions that we’ve answered, or share this with your friends or colleagues, please feel free to jump onto our website and capture the recording. We will also be monitoring Twitter throughout this session for any additional incoming questions.
If you use the hashtag #AskUsAnything or @mention Umatters, which is short for Usability Matters, we will be sure to pick those questions up and put them on the board. That said, we want to keep this session tight and get as many questions in as possible. If we don’t answer them all within the 20 minutes that we’ve allocated, then we’ll either host another session or do a follow-up blog post, so please stay tuned. So without further ado, I’m going to start off with our first question. Steven, this feels like it might be one for you. The first question is: how do you conduct research if you have no access to users at all? What information would you be interested in collecting?

Steven: So that’s a quandary that we often have in projects. Sometimes access to the real users is protected, so sometimes we have to find proxies for the real users. Sometimes the general public is better than no public, and certainly some testing, any testing, is better than no testing. But there are some proxies that we might look for on a particular project. Sometimes the salespeople at an organization, for example, have lots of really good contact with customers, so they might be a source of good proxy information. On another project recently, where we weren’t able to consult with customers in person, we were able to listen in on phone calls into the customer service helpline. So while we weren’t actually able to inject questions, we got a chance to listen remotely and then ask the customer service people about the kinds of things that they see on a regular basis. So the reality is we don’t always have access to users, and sometimes we have to find the best proxies.

Terry: And I would add to that, as Steven indicated, but I’ll make it a little more explicit: you might change the method that you would use. So we wouldn’t necessarily do, say, a usability test with proxies, because that might skew you to think that something isn’t gonna work for your audience when you haven’t actually tested it with that audience. So use proxies as a source of information; as Steven mentioned, a different kind of research like call listening would be great if you can’t get to end users specifically. But you would probably change the method.

Steven: Yeah, fair enough. I hope that answers the question, Anita.

Anita: I think so. This next question is about a B2C product. Is it more effective to go with an informal guerilla approach or a formal approach? And again, this is specifically about B2C products.

Steven: A more formal versus a less formal approach because it’s business-to-consumer?

Anita: Mm-hmm.

Steven: I’m not sure that business-to-consumer really impacts that. It’s really a matter of what research you’re trying to… What are your objectives, and what methods are you engaging in? So if you’re trying to find out information about who your audience is, you probably don’t want to go to just the general public; you want it to be a bit more targeted. So if you’re trying to address the needs of newborn moms or new immigrants or…

Anita: Newborn moms?

[laughter]

Steven: Newborn, yes exactly. Moms with newborns.

[laughter]

Steven: You’ve gotta be a little bit careful. But if your research is not about finding out more about who your audience is, but is more about general usability issues, then you can be a little bit more flexible if need be. Again, testing with an audience is better than testing with no audience.

Anita: Absolutely, it’s a good point. Okay, here’s another question, Steven. I think it goes back to that whole recruitment piece. Do you consider users who are not very familiar with websites, apps, or computers in general…

Steven: Where they’re conducting their research?

Anita: Yeah. How do you address that?

Steven: Well, typically, if we’re recruiting participants, we’ll screen neophytes out of the recruit intentionally, because the sessions tend to be really short. We don’t want to dedicate time to teaching people how to use a web browser, so most often we will choose not to include people who are that novice. But sometimes people who are that novice are indeed our specific audience of interest. Then it’s the reverse: we’ll select specifically for them, and leave out people who are more familiar, or experts.

Terry: Yeah, exactly. For example, we did one project with Elfa Plaza Literacy Center, so their website was obviously created for people who had low literacy, and we absolutely targeted them. The two things in that case went hand in hand: they had low literacy and consequently low experience with computers, and we had to make it easier for them. Often one of the first questions we’ll ask is whether the target audience includes people who are less familiar with technology, and more often than not our clients tell us that’s not the case.

Anita: Okay. All right, another question. This one is about redesigning software. Is it useful to test users on the current software when the new design will be so different, or would it be better to simply survey or interview users for more opinion-based data?

[background conversation]

Steven: I’m not quite sure what the asker of this question is trying to get at in terms of value, whether value is what we’re looking for, but I can talk about one particular case that comes to mind where we did intentionally test the current product before going into a redesign. That was really about trying to get a sense of two things. What are the biggest obstacles to usability in the current system that we want to smooth out in any new redesign? And what are the things that work really well that we want to make sure we don’t break? In that particular case, those questions were essential to answer, and usability testing is the best way to answer them. In most cases, however, we’re going to test the new designs, to make sure the assessment is really focused on those.

Anita: Okay. We have a great question from Emily on Twitter. She’s wondering about diary studies. Maybe, Terry, you can help us out with this one.

Terry: Sure.

Anita: Can you give a brief description of diary studies? When is the best time to use this method? Can you give a real-life example of the diary study process, and maybe some tools as well? And some cautionary points around that.

Terry: Yes. I’ll do what I can, but as it turns out, at Usability Matters we rarely have the opportunity to do diary studies. So I don’t want to give advice that I don’t actually have, but for those who are not familiar: a diary study would be a case where, just like with any other method, you would figure out what the objectives are and who you want to do the research with, and then the selected users would be asked, over a period of time, which could be very short or quite long, to make notes in whatever format is…

Terry: Usually whatever format’s most comfortable for the user, so you often give them more than one option, and then they would note things down on a regular basis about whatever the behavior is. So my understanding of diary studies is that they’re very exploratory, in terms of getting a deep understanding of user behavior around something, and that’s one of the reasons we haven’t done a lot of them. They tend to be very early research, often around innovating a product in a really major way, because you want to know about different behavior and how your product could fit into that behavior. We have done interviews, even long-term and multiple interviews with people over time, but we haven’t left it so much to them to do the diaries.

Steven: It’s funny, this is one of those techniques that’s been around in market research for decades, but it’s come back now as people step a little further back from just the usability of products and focus on broader service design. And that broader service design often requires a longer view. It just needs…

Terry: Yeah that’s a good point.

Steven: More data that has to be recorded, and more data that has to be assessed, so you want to figure that one out.

Terry: Yeah I imagine that we’ll do more as more of our clients come on board with the notion of looking at the broader picture.

Anita: Let’s keep moving. We’ve got a lot coming in here. Here’s a really neat one from Denise in our chat. Aside from large-scale UX projects, any recommendations on how to address day-to-day UX issues?

Steven: Day-to-day UX issues, like “my toaster is really hard to use”? We all have…

[overlapping conversation]

[chuckle]

Anita: She’s potentially looking for some tools or processes for that.

Steven: So, some of the tools that we use on a regular basis are whiteboards and sticky notes. You can see them behind Terry and me; these are key tools that we’re using all the time. We’re scribbling and writing and scratching and taking notes all the time on low-fidelity surfaces such as these. Tools and techniques for actual research…

Anita: Maybe a couple of guerilla tests with users, or…

Steven: Let me talk a little bit about that. I know there’s a lot of interest in remote testing and testing with mobile, so maybe I’ll focus on those two, with tools that might help. With remote usability testing, it’s kind of like this webinar: you’ve got people who are geographically dispersed who need to seem like they’re in the same space, so web conference software is key to that. And it’s funny, we just recently did a renewed survey of some of the web conference software options that are out there, with the aim of finding the easiest one for people who are not in our inner circle to use. For people who are recruited to be in a study, which ones can they most easily use? All of them work really well, but there are different levels of start-up effort. And our conclusion, to our surprise, is that at least for the next six months, Adobe Connect is our preferred option for remote studies. There are lots of reasons for it, but the small start-up cost for participants is really key among those.

Terry: Yeah, and also the fact that the participants only have to get a temporary plug-in; they don’t have to download software, which many of the other tools require. So thinking about how to make it as easy as possible for the participants is an important part of the consideration in this research, for sure.

Steven: On the mobile testing end of things, there are a couple of tools that we use all the time. One that we’ve found indispensable is a product called “Reflector,” which allows us to show what’s on a participant’s mobile device screen on the screens of our Macs. The big limitation there is that we’re talking iOS products, iPads and iPhones, onto Macs alone; it’s a limitation of scope. It works well, but it doesn’t allow us to test with Android devices or on PCs. But we were just looking at another product, currently in beta, that looks really promising, an app called “Look Back,” which covers iOS, Android, and desktop. So I’ve got my eye on that product, and the tools are getting a lot better for mobile testing.

Terry: And Reflector now has Reflector 2 out, which also allows you to mirror Android devices, but honestly we found it a little bit unstable. So we’re also looking forward to the next release of Reflector 2, the broader one; personally, we’ve had to step back to the original version because it’s just more stable in a test environment.

Steven: We’ve also, on a number of studies, used a webcam that’s on a stand or suspended from the ceiling, so you get a top-down view of what somebody’s holding in their hands and how they’re manipulating it. Are they turning it this way? Do they roll it this way? Do they lay it flat? It gives you a good sense of the physical part of interaction with the whole device.

Terry: Yeah. It’s easier to see from the top.

Anita: Okay. Can you guys talk a little bit about unmoderated usability testing tools? A couple of questions have come through the pipeline about different types of tools. One is about unmoderated testing: what type of data do you collect from it? Do you set usability testing goals and success criteria when you set up a test with this kind of tool?

Terry: Yeah, so again it falls to me to tell you that we haven’t done very much unmoderated testing. But we are actually planning one at the moment, and one of the reasons that we don’t do it often is that we find we get really rich data from a moderated test. We find that whether it’s a remote test, as Steven was discussing, or a face-to-face test, there’s a lot we can get by asking people very specific questions in the moment. So while there’s a guide, moderation allows you to be really present with the participant.

With unmoderated testing, as we’re finding, the challenge is writing questions that you won’t be there to help the participants interpret. So you have to have a really strong level of clarity. We are planning to use a tool called “Loop 11”, and it is made by a user experience design shop much like Usability Matters, although out of Melbourne. So we have high hopes for it, but we also have, I think, a clear understanding of what its limitations will be. The reason we decided to use it this time around is that we have already done usability testing and many rounds of accessibility review, but we are going to use it to get a broader group of people to review a website for accessibility and usability issues from the comfort of their own homes, on their own schedule, and using their own adaptive technologies. And it really is very task-based research, so it seemed fair to give unmoderated testing a try. So I’ll just quickly show you what we are expecting to get.

Now, as I said, we haven’t used the product yet, so what I’m showing you is the sample project that they gave us. This is what the data looks like: you’ve got tabs for dashboard, tasks, questions, participants, and filtering. The two types of things that you can specify are tasks and questions, and the dashboard rolls it all up. It gives an average task completion rate, so you have to define what it means to be complete. And here you can quickly see that there’s been a task about finding the Dow Jones close, one about the ability to return to the homepage, which would be something you might wanna test in terms of navigation, the price of an iPad, and a holiday deal. But if we dig a little deeper into the tasks…

Anita: Is this something that would be live while the people are… Or is this something that would come to you at the very end of the…

Terry: Yes, I think you get an ongoing report of it, but you want to specify either how long your testing runs or how many people you expect to go through; we’re planning to specify the number of people. And when we go to tasks, you get a little bit more detail, task by task. So this was the question for the Dow Jones close: using the website below, where would you go to find the exact amount of yesterday’s Dow Jones close? You need a really clear question, and then you see the people who answered it. If they put the wrong close in, they were not successful, and in some cases people might abandon; that’s the 2% there.

Anita: That’s very helpful.

Terry: Yeah, it can be, but you have to be very careful about unmoderated testing because there’s very little chance to adjust; you’ve got those questions and you have to go with them. Besides specifying tasks, you can also ask more survey-type questions: overall, how easy or difficult did you find finding information, blah blah blah. And then, much like SurveyMonkey, which we use quite a bit when we’re doing a survey, you can see the data by question or by participant. So this is what each participant did, but you can imagine that once you’ve got 50 or 60 participants, that’s not gonna be so easy to do. And then filtering, which we haven’t really played with yet, lets you, as it says here, narrow the data down to exclude certain kinds of folks, which would help a lot with more detailed analysis. So that’s what we’re planning to use, but as I say, I’m afraid we don’t have a ton of advice to give yet. Stay tuned; after we’ve used it, we’ll maybe come back and give you some more insight.

Anita: Love it.

Steven: Terry and I did a couple of webinars earlier in the year on research methods, and unmoderated testing was part of the discussion there. One of the questions that came up was about the numbers. We can see right there we’ve got a 48% average task completion rate. One of the caveats, particularly when we’re looking at smaller sample sizes, is not to put an overemphasis on those numbers, because their statistical validity can be brought into question. They’re really good gauges, a thumb-in-the-air sense of things. But don’t focus too much on the specific numbers; look more at the trends.
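
To put a number behind Steven’s caveat, here is a minimal sketch, in Python, of how wide the uncertainty around a small-sample completion rate really is. This is illustrative only, not something shown in the webinar and not tied to Loop 11: the 48% figure comes from the sample dashboard above, while the sample sizes and the use of the standard Wilson score interval are our own assumptions for the example.

# Illustrative sketch (not from the webinar): the 95% Wilson score interval
# around an observed ~48% task completion rate, at several sample sizes.
import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score confidence interval for a binomial proportion.
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - margin, centre + margin

for n in (10, 25, 100):
    low, high = wilson_interval(round(0.48 * n), n)
    print(f"n={n:>3}: observed ~48%, plausible range {low:.0%} to {high:.0%}")

With 10 participants, the plausible range runs from roughly 24% to 76%; even at 100 it is still about 38% to 58%. That spread is exactly why the trend matters more than the specific number.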

Terry: Well, the difference here is that it doesn’t tell us why, much like any other statistical method. And I think we’re gonna talk a little bit about A/B testing, so maybe I’ll slide into that: A/B testing is in the same ballpark. Maybe we don’t care why; if page A is doing better than page B, maybe that’s all we care about. But if we do care about why, if we wanna be able to improve, A/B testing doesn’t tell us that either. It just tells us that more people had success, whatever success is. Bought an item, downloaded a white paper, whatever the success criterion is, it tells us what happened, but it does not tell us why. And that’s why we favor moderated testing: we can sit there and discuss with the participant, and also observe, why someone was or wasn’t successful.

Anita: So the insights are much deeper.

Terry: Yeah, for us it provides much more design direction than these would give us.

Steven: Right. Interesting.

Anita: Okay, we’re slowly running out of time and we still have a lot of questions. There is one question that keeps popping up; maybe, Terry, you can help out with this one. First: what are some good techniques to convince management of the importance of user research? And then maybe you can slide into some advice on how to convince them to begin user research.

Terry: Yeah, it’s a tough one that we’ve faced throughout our business, even before we started it. But for for-profit organizations, what I have found successful in some accounts is to make the connection to market research. Even though user research is different, more behavioral, whereas market research tends to be more opinion-based, I do find that there’s more of a tradition of making budgets available in the marketing group. So I’ve turned to someone in an organization who’s like, “Nah, I think we’re gonna have to skip the user testing,” and said, “But you wouldn’t let anything else leave your organization without testing.”

Anita: Interesting.

Terry: Like, there’s not a cereal box in the aisle whose image has not been tested with the end user. It’s not that that’s where I believe the value lies so strongly; what I’m trying to do is explain, in words that they can understand, what the value would be to them. And it is risky to put something out there that has not been vetted with users, be it a concept, which we had some questions about, versus an actual product that’s already created. As for other ways to convince people who want numbers, we try to spell out what the risk of not testing would be. The other thing is that it would be a shame to put together an entire support mechanism for a product without knowing it was the right product.

Anita: Can you imagine going live with something you’ve invested all this time and money in?

Terry: Well, it happens; you hear about it all the time, though.

Anita: Yeah, which is really a shame.

Terry: Yeah.

Steven: So the product is going to be tested, whether it happens under your control or after you’ve put it out into the wild.

Anita: That’s right.

Terry: And out in the wild there are far more risks and implications for budget and for success.

Anita: Yep.

Terry: And then some of the newer, lean thinking is: minimum viable product, put it out there, let users test it, if you’re prepared for that. And you do see more of the big players saying, “Hey, this is a beta, tell us what you think.” So testing in the wild is okay too, but let’s let them make that conscious decision. You make a great point, Steve.

Anita: Yep, it’s a great point. And you know what’s interesting? Just watching in the background as you folks do the user research, it’s really interesting to see senior leadership watch actual users struggle with some of the products.

Terry: Oh, if you can get them into the observation room, or into remote observing… We do find we have our best shot when we get them into the back room and we make sure they have a nice lunch and all those things. But it’s humbling to watch your product get tested, and so…

Anita: And they appreciate it.

Terry: They do. And it’s not hard to convince them the second time.

Anita: Yes, it’s true. Yeah, I really enjoyed watching that. Okay, unfortunately, we’ve actually gone over time. There are still a handful of questions that didn’t get answered. We will try to address them either in the blog post that accompanies the recording here today, so watch for that, or we might actually do a follow-up ’cause again, there were a number of questions. So, thank you again to Terry and Steven, and thank you to Louise McCulloch, our marketing coordinator who’s been fiercely working in the background to make sure all this came together.

Steven: Thanks Louise.

Louise: You’re welcome!

Anita: And finally, a big thanks to the fantastic audience out there. We love all your questions and continued curiosity about all things UX. Be sure to stay tuned to our social media channels for updates on upcoming Ask Us Anything live webinar chats. We’ll see you next time.

Steven: Bye all.
