Remote Research Blog

Real Users, Real Time, Real Research

Posts written by Nate Bolt & Tony Tulathimutte

  • Remote Research is now on sale!

    Posted on

    Yes, it’s finally here!

    You can now pick up a copy of Nate and Tony’s Remote Research in a variety of flavors: one package (US$36) includes a lovely four-color paperback and a screen-optimized DRM-free PDF; the other (US$22) is a pair of DRM-free PDFs (one screen-optimized, the other for you to print yourself).

    Let the remote goodness begin!

    User Research Friday 2010

    Posted on

    User Research Friday 2010 is coming February 19!

    URF is a casual half-day conference brought to you by the gentle folks at Bolt | Peters. We bring user research experts together for advanced discussion, beverages, relaxed learning, and heavy socializing. Past attendees have included design, UX, and research superstars from organizations including IBM, Google, Nokia, HP, Stanford, Genentech, Intuit, Schwab, Williams-Sonoma, Cooper, Adaptive Path, Frog Design, SAP, Intel, and hundreds more.

    The speaker line-up includes:

    Rob Aseron – Director of User Research, Zynga
    Ed Langstroth – Volkswagen of America Electronic Research Lab
    Michal Migurski – Stamen
    Nate Bolt – CEO, Bolt | Peters
    Brynn Evans – UX Consultant & Digital Anthropologist

    Sign up ASAP–only 15 spots left as of Feb 8th!

    Time-Aware Research

    Posted on

    The soul of remote research is that it lets you conduct what we call Time-Aware Research.

    By now UX researchers are familiar with the importance of understanding the usage context of an interface–the physical environment where people normally use it. Remote research opens the door to conducting research that also happens at the moment in people’s real lives when they’re performing a task of interest. This is possible because of live recruiting (the subject of Chapter 3), a method that allows you to instantly recruit people who are right in the middle of performing the task you’re interested in, using anything from the Web to text messages. Time-awareness in research makes all the difference in user motivation: it means that users are personally invested in what they’re doing, because they’re doing it for their own reasons, not because you’re directing them to; they would have done it whether or not they were in your study.

    Consider the difference between these two scenarios:

    1. You’ve been recruited for some sort of computer study. The moderator shows you this online map Web app you’ve never heard of, and asks you to use it to find some random place you’ve never heard of. This task is a little tricky, but since you’re sitting in this quiet lab and focusing–and they’re not going to let you collect your incentive check and leave until you finish–you figure it out eventually. Not so bad.

    2. You’ve been planning a family vacation for months, but you’ve been busy at work so you procrastinated a bit on the planning, and now it’s the morning of the trip and you’re trying to quickly print out directions between finishing your packing and getting your kids packed. Your coworker told you about this MapTool website you’ve never used before, so you decide to give it a shot, and it’s not so bad; that is, until you get stuck because you can’t find the freaking button to print out the directions, and you’re supposed to leave in an hour, but you can’t until you print these damn directions, but your kids are jumping up and down on their suitcases and asking you where everything is. Why can’t they just make this stupid crap easy to use? Isn’t it OBVIOUS what’s wrong with it? Haven’t they ever seen a REAL PERSON use it before???

    Circumstances matter a lot in user research, and someone who’s using an interface in real life, for real purposes, is going to behave a lot differently–and give more accurate feedback–than someone who’s just being told to accomplish some little task to be able to collect an incentive check. Time-awareness is an important concept, so we’ll bring it up again throughout our book to demonstrate how the concept relates to different aspects of the remote research process (recruiting, moderating, and so on).


    Remember that diagram in Back to The Future II? Doc argues that messing with time has sent the world crashing hopelessly toward an alternate reality where things are horrible: the “Wrong 1985.” And that’s sort of what happens when you try to assign people a hypothetical task to do, at a time when they may or may not actually want to do it: you’re meddling with their time, and it’ll create results that look like the real thing but are all wrong.

    When you schedule participants in advance and then ask them to pretend to care, you’re sending your research into the Wrong 1985. If you don’t want to create a time paradox–thereby ending the universe–you should do time-aware research.

    Screening Out Liars From Your Usability Study

    Posted on

    A new article on 90 Percent of Everything discusses a few ways to screen out potential “fake users” who lie about their qualifications to participate in your study:

    In fact, a lot of liars can be screened out by writing a really good screener questionnaire. For example, here’s a decoy question that the Mozilla metrics team used in their recent Test Pilot survey.
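    The Mozilla decoy question itself isn’t reproduced here, but the general trick is easy to sketch: list a product or feature that doesn’t exist alongside real ones, and treat anyone who claims familiarity with it as a probable faker. Here’s a minimal illustration in Python–the product names, including the decoy “Fluxbit Pro”, are made up for this example and aren’t from the Test Pilot survey:

```python
# Screener decoy check: "Fluxbit Pro" does not exist, so any respondent
# who claims to use it is probably overstating their qualifications.
SCREENER_OPTIONS = ["Firefox", "Thunderbird", "Fluxbit Pro"]  # last one is the decoy

def passes_decoy_check(products_claimed):
    """Return True if the respondent did NOT claim the decoy product."""
    return "Fluxbit Pro" not in products_claimed

print(passes_decoy_check(["Firefox"]))                  # True: honest answer
print(passes_decoy_check(["Firefox", "Fluxbit Pro"]))   # False: likely faker
```

A failed decoy check shouldn’t automatically disqualify someone on its own–people do misread questions–but it’s a strong signal when combined with the other red flags below.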

    This is really handy to know when live recruiting users from your website–the risk for fakers is even higher when the recruiting pool consists of anyone who comes to your website. Chapter 3 of our Remote Research book also touches on this subject, specifically as it relates to live recruiting. Here are two more pointers:

    –Occasionally when people catch wind of a paid survey offer, they like to post it on “bargain hunting sites” like FatWallet. If you get a sudden surge of recruits, that may be the reason; check the referrer data in your traffic log or analytics to confirm where users are coming from.

    –Use open-ended questions to test people’s motives for coming to the site. If someone responds to the question “Why did you come to the site today?” with a vague answer like “To check the offerings” or “Just looking around”, consider that a yellow flag, and follow up with more specific interview questions.
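    To make the referrer check from the first pointer concrete, here’s a rough sketch of scanning recruit records for referrers from deal-hunting sites. The domain watch list and the record format are assumptions for illustration, not taken from any particular analytics tool:

```python
# Flag recruits whose HTTP referrer suggests the study offer was posted
# on a bargain-hunting site. Adapt DEAL_SITES and the record format to
# your own traffic log or analytics export.
from urllib.parse import urlparse

DEAL_SITES = {"fatwallet.com", "slickdeals.net"}  # hypothetical watch list

def flag_deal_site_recruits(rows):
    """rows: iterable of dicts with a 'referrer' field. Returns flagged rows."""
    flagged = []
    for row in rows:
        host = urlparse(row.get("referrer", "")).netloc.lower()
        if host.startswith("www."):  # so www.fatwallet.com matches fatwallet.com
            host = host[4:]
        if host in DEAL_SITES:
            flagged.append(row)
    return flagged

recruits = [
    {"email": "a@example.com", "referrer": "http://www.fatwallet.com/forums/123"},
    {"email": "b@example.com", "referrer": "http://www.google.com/search?q=maps"},
]
print([r["email"] for r in flag_deal_site_recruits(recruits)])  # ['a@example.com']
```

A sudden surge of flagged rows is your cue to tighten the screener or pause the recruiting offer.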

    Sample Recruiting Forms

    Posted on

    There are lots of ways to use your website to recruit participants for a remote research study. (If you’re already confused, see this post for an introduction to live recruiting.)

    First and foremost, there’s our web app Ethnio, which our company Bolt | Peters built specifically for the purpose. It uses a DHTML layer to display a pop-up recruiting form right on top of your webpage, so it’s unlikely that visitors will miss it. (Here’s an example.) All you need to do to install it is place a single line of JavaScript in the code of your website.

    Another option is to embed a form somewhere on your website that users can fill out to opt-in to your study. Here are a few examples of form tools you can use: the first one uses the Forms functionality in Google Docs, and the second is a standard Wufoo form.

    Google Docs form:

    The responses for a Google Docs form are loaded into a Google Spreadsheet, like this. You can also link to a page that contains only the form.

    Wufoo form:

    Wufoo forms work basically the same way, loading their responses into an online table (login required). As with Google forms, you can link to a page that contains only the form.

    Escape The Lab

    Posted on

    Want to learn remote research? Bolt | Peters is hosting a one-day workshop on August 26th, and you’re invited. Give us a day and we can teach you all the rocket surgery you need to conduct qualitative studies the real-time, native environment way.

    Date: Wednesday, August 26th, 2009
    Time: 9am – 4:30pm. Sign-in starts at 8:30am, drinks and schmoozing afterwards
    Place: Bolt | Peters User Experience at 60 Rausch St., unit 102, San Francisco, CA
    More Info:
    Cost: $399. Register now (space very limited). 1/2 off for students and underemployed.
    By: Bolt | Peters User Experience, the makers of Ethnio

    Bolt | Peters Instructors

    Cyd Harrell, Director of Research
    Frances James, Lead UX Researcher
    Nate Bolt, CEO

    Who Should Attend?

    Researchers, designers, and product managers who want to watch real people use technology from the comfort of their own desks. (While saving travel costs and the planet!)

    What We’ll Cover

    Strengths and weaknesses of remote UX research
    Study design & scripting
    Participant recruiting options
    Moderating in the remote environment
    Tools for screen sharing, recording, and communication
    What can go wrong and what to do about it

    What You’ll Take Home

    A Trapper Keeper full of script outlines, consent forms, and software comparisons
    A starter account for Ethnio online recruiting
    A coupon for 20% off our forthcoming book, Remote Research
    15% discount on all Rosenfeld Media books
    A newfound confidence in conducting your own remote research!

    Register now at: (Space is superduper limited.)

    Hope to see you there!

    Bolt | Peters User Experience

    Read Chapter One of Remote Research!

    Posted on

    Hello, readers! We’ve decided to post, right here on this blog, a working draft of the first chapter of our book (minus pictures and diagrams), which is about when and why you should do remote research, and when lab or in-person research is more appropriate. We’d love for you to read it, search through it for hidden messages, tweet about it, tattoo it across your back in its entirety, et cetera. Since it’s still in draft form, we encourage you to send us your feedback–we expect the final draft to be different, and probably shorter.

    Hope you enjoy! – Tony


    Chapter 1: Why Remote Research?

    Up until a few years ago, lab research was the only game in town, and as with most industry practices, its procedures were developed, refined, and standardized, eventually becoming entrenched in the corporate R&D product development cycle. Practically everything nowadays gets tested this way: commercial websites, professional software, even video games.

    Part of the appeal of lab-based user research was that it provided a scientific (or at least scientific-seeming) basis for making decisions by using observational data, instead of someone’s error-prone gut instincts. Stakeholders appreciate the accountability, reliability, and precise metrics of properly managed lab research. On top of that, the influence of market research has placed a high premium on understanding user opinions, which has made moderated focus groups practically synonymous with user research to most people (see sidebar). But lots of UX practitioners continue to do lab research just because it’s what people have been doing for a long time.

    Sidebar: Market Research vs. UX Research

    The first important distinction to sort out is the difference between market research and user experience (UX) research. Market research is by far the more commonly performed research; when most organizations conduct research on people, they’re doing market research. Market research takes the lion’s share of corporate dollars; UX research comprises just a fraction of that.

    This book is about UX research: studying behavior, not preference.

    The main difference between the two fields is that market research focuses on opinions, while UX research examines behaviors. A market research study might have goals like:

    • “Determine how users respond to the brand”
    • “Identify different segments’ color preferences for the homepage”
    • “See if users like the new mascot”
    • “Determine what users enjoy most and least about our site”

    The goals of a UX study, on the other hand, might sound more like:

    • “Can anyone actually use my interface?”
    • “Determine where users make errors in completing a purchase”
    • “See if users can successfully create a playlist”
    • “Understand why users aren’t logging in”
    • “See how users mentally organize different product categories”

    One important thing to keep in mind about market research is that it’s useless over small sample sizes. Opinions can vary widely across demographics and location, are very sensitive to the phrasing of the research questions, and can even change with relative frequency. Human behavior, on the other hand, is fairly consistent across demographics and location for most tasks, and recent research finds that 80% of usability flaws in a given software interface can be uncovered by five users in a moderated study, with sharply diminishing returns beyond the fifth user.

    Put it this way: ask five people what they think about how well a door is designed, and their comments might not overlap at all: one blames the condition of the hinges, another complains about the weight of the door, another about the loose doorknob, and so on. But if you observe five people walking through the door, and then two of them accidentally try pushing instead of pulling, then there’s your design flaw right there.

    So, to put it all together: you can do market or UX research on anything you want; it just depends on what you’re trying to find out.

    Sidebar footnote: “Determining Usability Test Sample Size”, Turner, Lewis, and Nielsen, 2006.

    [End of sidebar]

    So are we saying that lab research is dead?

    Heck no. Lab and remote research share the same broad purpose: to understand how people interact with the thing you’ve made (let’s just call it “your interface”). Let’s not set up a false opposition between the two approaches–one isn’t inherently better than the other. They each have their own strengths and weaknesses, and there will be cases when one or the other will be more appropriate. Despite the versatility of remote research, there are still loads of reasons why you might want to conduct a lab study, most of which have to do with either security, equipment, or the type of interaction you want to have with your research participants. More generally, lab research is great when you need a high degree of control over some aspect of the session.

    Security is often a concern for institutions like banks and hospitals, which deal in sensitive information, or companies concerned with guarding certain types of intellectual property. If you’re testing a top-secret prototype, you obviously don’t want to let people access something from their home computer, where it could be saved or screen-captured. And on the flip side, you might also be doing a study on users who would be secretive about sharing what’s on their screen–government employees, doctors, or lab technicians, for instance. Either way, you’ll want to bring the user into a controlled lab environment in order to keep the cat in the bag, especially if what you’re testing is so hush-hush that you’ve got to have the users sign a nondisclosure form.

    You might also want to do lab testing if your users are unable to screenshare, for whatever reason. Some studies (of rural users, cybercafe patrons, etc.) may require you to talk to users who don’t have reliable high-speed internet connections, who own computers too slow or unstable to use screensharing services effectively, or who have operating systems incompatible with the screensharing tools you’re using. These restrictions only apply to moderated studies, for which you need to see what’s on your users’ screens.

    Depending on what you’re testing, you may also need certain special software or physical equipment to run the study properly–this is most often the case with software that’s still under development. It can be a hassle to get users to install and configure tools to run elaborate software (though that’s not unheard of), and requiring users to have certain equipment can make recruiting difficult.

    Finally, some kinds of research will require you to study certain things about the user that are difficult to gather remotely. Eye-tracking studies have recently come into use in UX research, and for that kind of study, you’d need to bring the users to the eye-tracking device. Other studies might require you to attend to the participants’ physical movements, which may be difficult to capture with a stationary webcam. And then there are multi-user testing sessions, in which a single research moderator facilitates many participants at once–screensharing is currently not well-suited to sharing multiple desktops at once, though some tools (e.g. GoToMeeting) make it relatively painless to switch from one desktop to another.

    Although these situations are all compelling reasons to conduct lab research, part of what we want to demonstrate in this book is that remote research is very broad and adaptable, and even if a study is conducted in a lab, elements of remote methods can be incorporated in order to enhance lab research methods–we’ll get to that in Chapter 9.

    Sidebar: The Case Against Remote Research: A Different Perspective

    Even though we’re obviously staunch advocates of remote user testing, we acknowledge that not all user research practitioners think it’s the hottest thing ever. Andy Budd is the creative director at Clearleft, a renowned London-based team of UX and web design experts, and he isn’t such a big fan of remote research methods for moderated studies. Here’s why.

    Full disclosure, here: I’ve done very little proper remote testing. The reason is that I’ve never found a credible need to do remote testing. There have been a few instances where it was a possibility, but looking at the actual factors involved, we found other ways of testing that didn’t require a remote approach. My issue is less about the negatives of remote testing; it’s more about the positives of live, in-person testing. Now, that’s not to say we test in a lab, either. I find that lab testing gives a veneer of formality and scientific accuracy, which it quite often doesn’t have. And testing labs and equipment are often a lot more expensive, and tend to kind of bog things down. So we tend to take a grittier approach, either just a meeting room and a video camera, or a meeting room and some simple screencapture software. We see usability testing more as a formative tool than a summative tool, so we use it as part of the overall design process as a way of getting design insights and figuring out what the problems are and fixing them, rather than presenting formal reports to stakeholders and all that malarkey.

    We gain a lot of information by actually being in the room with people. They say 90% of communication is nonverbal. It’s about the subtle cues that people give away by their tone of voice or by their posture or by the tensing of their muscles. And I find that when you’re in a room with a real test subject, you pick up these kinds of signals more easily.
    My experience with simple online video conferencing such as Skype is that when you’re talking to two or three people online, even if you know them really well, the social conventions break down. You’re not able to read the cues and the body signals that tell you when one person stopped talking or when it’s OK for another person to start talking. You get this lag and these clashes where people talk over each other.

    And so, I think the virtual nature — the being out of the room with somebody — is very difficult to do well, and it’s possible that it’s to do with our ability to use these new tools and technology. I’m sure, 50 or 100 years ago when people first started using the telephone there was a similar kind of discussion over this really new, alien situation.
    So, I wouldn’t be surprised — give it 20 or 30 years — when video conferencing becomes a norm and we’ve learned and adapted how to understand and read these subtle cues better, it will become commonplace. But, I find that even today, that’s really difficult, so I think that there’s a lot of value being in a room with a person because you have the potential to lose about 90% of the information that’s coming through to you if you’re not.

    On the potential biasing effect of physical moderator presence

    Whatever kind of usability testing you do is always going to have a moderator effect, and it is very difficult to mitigate. I think whether you do it in person or virtually, the quality of the moderator and moderating will have a greater impact than the medium that the moderating is being done through. If you set up your task correctly and you’re asking the right questions — or preferably not asking questions at all, just letting the user talk aloud as you sink into the background — then I think those effects become negligible.
    You still have to be aware that the moderating might have accidentally led the test subject, but I don’t see these tests as highly scientific, hermetically sealed university-level projects where you have to cut out every single possible area where there might be variance. These sorts of tests work much better as tools for getting quick information and then inputting it back into the system. Obviously, if we are writing a peer-reviewed scientific paper, any possible variance or anomaly should be picked up, because that’s how scientific papers are written. But for purposes of improving the click-through rates on a sign-up form, I think this slight possible observer effect has little impact and can be mitigated by having a good moderator.

    On the shortcomings of remote methods

    I think the big thing really is, as I mentioned before, you lose a lot of nonverbal communication, even using a webcam. Usability testing is all about empathy. It’s all about understanding and creating a connection–about getting inside users’ mindsets and what they are going through. That kind of empathy is very difficult to build or create through a filter of web conferencing software. Whereas I think if you are actually physically in a room with somebody, it’s much easier to get that understanding.

    I think that’s the big issue, sort of, like an Uncanny Valley. It’s that gulf of miscommunication that makes it less attractive to me. I think there are instances where you should use remote moderated testing, quite often when it’s impossible to actually recruit users to a specific geographic location. Recently, we were working on a project for a South American Craigslist-style classified ad site, very big in Brazil. We sort of tossed and turned with the idea that we wanted to get this perfect, highly scientific usability test, that we needed to actually speak to people who are living in Brazil at the moment. There was no way we could do it, short of flying over to Brazil. We initially felt we ought to do some kind of remote testing, but then we realized that–this is partly luck–but we live in a fairly big city, and there is actually a large ex-pat Brazilian community and quite a large Brazilian student community here. So instead, we went to a Brazilian cafe and sat down and just chatted to actual Brazilian people who happened to be living in the UK.
    Some people used to say, “How can you possibly say that you are getting exactly the same experience sitting in a cafe talking to Brazilian people in Brighton as you would in a favela in São Paulo?” But again, I think that the difference is so subtle as to make little difference on the result of the usability test, particularly when you are testing in small numbers. Sure, if you’re doing a scientific test and looking across very large sample sets of potentially hundreds of people, these small minute effects are going to play a much bigger role.

    It also depends on what you are actually testing. There are some obvious cultural differences with the way people use the web, but there are also universal habits–registering for a service, noticing positioning, all these kinds of things. It’s very unlikely that the Brazilian community in London or Brighton would fail to pick up something that the Brazilian community in São Paulo might. There is very little difference in terms of the knowledge required to use these services.

    When remote research is appropriate

    One time we considered remote testing was in a situation where we wanted to recruit people that had very specific domain knowledge that was quite rare. These were professional people who wouldn’t be interested in coming into the office, or having us come and bother them at their office for fifty bucks. They could only give us a very small amount of time, and that time was very precious to them. In that situation, we definitely considered using remote testing because we could go and find them at exactly the right time that was suitable for them. I think that would have been a perfect use of remote testing.

    But I think a lot of arguments people have over remote testing are probably unrealistic. I have read discussions online and talked to people who’ve said things like: “We’re a web design firm in California, but we’re creating websites for people in Florida, and obviously people in Florida are really different from people in California.” Well, are they really? If you are selecting good users, then you are not selecting based on the geographic location; you are selecting based on a whole bunch of other things. That’s my opinion in a nutshell.

    On the use of technology in user research

    People often try and find technological answers to human problems. I think a lot of the drive for remote usability testing is an attempt to find shortcuts. “Oh, it’s really difficult to find usability test subjects, so let’s get technology to help us out. It’s a pain in the arse having to travel to see someone; let’s get technology to sort it out.” Today I was reading an article on a UX mailing list where someone was saying, “How can we do remote ethnographic research? Can we get live cameras streaming?” And I thought, “Do a diary study.” Diary studies are probably the ultimate in remote ethnographic research: give somebody a diary and let them jot their notes there. You don’t need to set up some kind of remote webcam streaming back to Mission HQ. People love tinkering with technology because it makes them feel like superheroes. It’s something to show off: “Oh yeah, last night I did this live ethnographic survey with someone halfway around the world.” I believe in human solutions; I think technology is often used as a crutch.

    I think remote testing is still in its infancy. A lot of remote testing is based on the technology that’s available. In the early days, people were essentially lashing together screensharing and recording solutions with duct tape, but now we’re seeing the rise of more dedicated remote testing software. It’s preferable to have smooth, lightweight technology that you can just send to a novice user, double-click on it and it opens and it installs. But if you’re looking at capturing an average desktop screen, which is now above and beyond 1024×768, and you also want to capture the person’s reactions, you want to send audio and video down that pipe as well — that’s a very complicated engineering problem. You need good bandwidth to do this really well. So then you create artificial problems, because you’re limited to testing people who have got pretty good tech and decent bandwidth. And so that would probably prevent us going and doing remote testing with somebody, say, in a cybercafe in Brazil. Once we all get fat pipes and really fast supercomputers, I think moderated remote usability testing will be much easier.
    What I am kind of interested in is unmoderated [i.e. automated] remote testing, because it’s sort of a hybrid between usability testing and statistical analysis or analytics. The benefit is that you can test on a much wider sample set. And it also complements in-person, lab-based usability testing. But I find it a bit awkward calling it usability testing, because it doesn’t really match with my belief and my understanding of what usability testing is. It’s more like user-centered analytics.

    On the purpose of user testing

    I think the point is to develop a core empathetic understanding of what your users’ needs and requirements are, to really get inside the heads of your users. And I think the only way you can do that is through qualitative, observational usability testing. There’s lots of quantitative tools out there, eye-tracking and all this stuff that can tell you what’s happening, but it can’t necessarily tell you why it’s happening. With any kind of unmoderated testing, you can get a surface-level understanding of what’s happening through the facts and the figures, but I think that qualitative connection gets missed. You can ask people, “Why did you do this?” and people will say, “I did this because…” But I think the benefit of proper usability testing is that it’s observed behavior, not what people think they’re doing.

    We’ve all done usability tests where you watch people struggle and have a real problem doing something, and you know they’re having a problem because you’re observing it. Then, when you ask them at the end of the test, they’ll go, “Oh, yes. It was fine. It was easy.” We did a usability test last week where a user thought he had purchased a ticket, and he hadn’t, and he’d left thinking he’d purchased a ticket. So he thought he’d succeeded, and if he’d told you, “Yes, I’ve succeeded,” you would have been mistaken. Watching and observing what users do is very enlightening.

    Frankly, it’s much easier for people to learn from direct experience than it is through analyzing statistics. I think statistics can be potentially dangerous, because as we all know, they’re open to a lot of interpretational bias, and then a lot depends on how those statistics are presented. So there’s nothing like actually watching people, and being in the same room as them.

    [End of sidebar]

    What’s remote research good for?

    Again, most studies can successfully be done either in the lab or remotely; but, just as there are times when lab testing is more appropriate, there are also times when it makes more sense to use remote research methods. The strengths of remote research lie in its comparatively minimal setup requirements and its ability to reach anywhere computers can go: you can be anywhere, your participants can be anywhere. If you’re a lone-wolf consultant or a start-up team working out of a cafe, it can be hard to get the distraction-free office space you need to do lab testing. If it’s too much bother to set up a proper lab, go remote; all you’ll need is a desk.

    Even if you have a lab, the users you want to talk to may not be able to get to it. This is actually the most common scenario: your interface, like most, is designed to be accessed and used all around the world, and you want to talk to users from around the world to get a range of perspectives — will Chinese players like my video game? Is my online map widget intuitive even for users outside of Silicon Valley? Big companies like Nokia and Microsoft are often able to conduct huge ambitious research projects to address these questions, coordinating research projects in different labs around the world, flying researchers around in first-class. If you aren’t Nokia or Microsoft, well, you have our deepest sympathies: you probably don’t have the cash for a million-dollar international longitudinal Gorillas-in-the-Mist project. Remote research is a no-brainer solution: if you can’t get to where your users are, test them remotely.

    Beyond travel expenses, there are other costs associated with lab testing that may be reduced or eliminated when you test remotely. With live recruiting methods, you can get around third-party recruiting costs, and because the recruiting pool is larger, you may not have to offer as much in the way of incentives as you might otherwise in order to attract enough participants. Because sessions are conducted through the computer, software exists to replace costly testing accessories, such as video cameras, observation monitors, and one-way mirrors.

    Closely related to the issue of money, as always, is time. Nearly all existing recruiting methods take a number of weeks: recruiting agencies usually require a couple of weeks to gather recruits, and writing out precise recruiting requirements and explaining the study to them can eat up a lot of time. Getting users from your own mailing lists can be faster and moderately effective, but what if you don’t have one? Or what if you’ve overfished the list from previous studies, or you don’t want to spam your customers, or you’re looking to test people who’ve never used your interface or heard of your company before? In any of these cases, recruiting your users online makes a lot of sense, since it allows you to do your recruiting as research sessions are ongoing. (We teach you how to do all this in chapter 3.)

    Some interfaces just don’t make any sense to test outside of their intended usage environment. If a software or web tool requires users to have all their photos and videos on hand, it’s going to be a pain in the butt to have them bring their laptop or media into a lab. Or, for instance, you’re testing some new functionality on a recipe website that guides users step-by-step through preparing a meal: it wouldn’t make much sense to take people out of their kitchen, where they’re unable to perform the task of interest. When this is the case, remote research is usually the most practical solution, unless (as mentioned earlier) the users lack the equipment for remote testing.

    Sidebar: Why I Went Remote

    Old habits die hard, but for any number of reasons — cost, convenience, international testing — a handful of former lab researchers have switched to remote methods and never looked back. Here to talk with us about why he decided to go with remote methods after many years of lab testing is Brian Beaver, award-winning creative director for Sony’s ImageStation.

    On going remote

    I have quite a bit of experience, either organizing user research sessions or participating in them, especially here at Sony. Prior to working at SonyStyle, I helped to support their photo sharing site. Along the way I worked on a number of non-web projects that involved either product or product UI or packaging, and we’ve had a number of opportunities to do some lab usability testing for those types of things as well. My research experience is pretty varied, and the outcomes are always interesting, but I’m a big fan of remote usability testing. It seems to give me the best bang for my buck.

    I’d read about it at some point a few years back, and had done some work with Adaptive Path when they had a focus on usability. I ended up talking to Peter [Merholz] at Adaptive Path, who steered me towards Bolt|Peters, and when Nate shared his remote approach, I knew we were in sync because a lot of the pain points and skeptical raised-eyebrows around some of the results we’d obtained in previous lab testing instantly diminished with remote usability testing.

    The pain points always revolved around recruiting. With a website you have such a geographically diverse pool that it can be challenging to get a core group of your users together in one location. Sony tends to be very protective of their customer information, and wouldn’t share it out to a research company for the purposes of recruiting, so we’d have to take on that task ourselves, which was always sort of painful.

    The raised-eyebrows were always about participant motivation, and validity of the recruiting process and methodology. There were always questions: how valid are these findings? Are these real users? But when you intercept users who are on your website who are in the process of performing a task, those questions evaporate.

    On overcoming reservations about remote methods

    Nate was very thorough about providing a lot of documentation and case studies about the methods, so I think my apprehension was assuaged by those case studies. From our perspective, I think the ability to make the process of usability social in a way that wasn’t possible before was a really positive thing.

    On participating in remote research

    In the past we’d invite our business partners or other folks who had some stake in the testing to the lab, but it was always difficult to get them to take time out of their day to travel to the lab, and it was a big production. But if they can just bring their laptop to a conference room down the hall and just be there to listen in, it’s fantastic. You’d get the same advantages if you had everyone available to go to the lab testing, and the level of engagement is a lot greater. By virtue of having a lot of stakeholders in the room, you get more diverse viewpoints, and the interaction between us observers and the moderator tends to be lively — we keep that chat box going throughout that whole interview. The ability to observe and discuss things as they come up and then immediately give feedback to the moderator is really powerful.

    As someone who is in the middle of the process, we’ve got customers and the usability to consider on the one hand, and on the other hand we have a lot of business stakeholders who have really strong opinions about how things should be done. And so a lot of times there’s been some tension around how to deliver the recommendations, because we know there’s going to be contention about findings that fly in the face of our preconceived notions about how things should be done on the website. So having everyone in the room watching the feedback and engage with the process is really powerful. As a technology company, our product marketing groups tend to get wrapped up in talking about technology. We were recently in the middle of a digital camera usability session and were asking the user to go through the features and content we have on the site, and the customer’s going through it and he’s like, this all seems really impressive but I really just want to know if it takes great pictures. And you see this light bulb go off above the product marketing people’s heads. We’re so close to this that we have absolute myopia. It was such a cutting comment, and it was a real eye opening moment.

    On benefiting from remote methods

    We’re into our third major round of remote usability. Our second study was about TVs and the TV shopping process. Sony has a broad line of TVs, somewhere around 9 to 10 different series, and each has a dozen size options, so you have a lot of choices. [During the study] there was an “A-ha!” moment, a phenomenon we hadn’t seen before: I don’t know if it was because of the advent and proliferation of tabbed browsers, but people would often have half-a-dozen to a dozen sites already open at a time, and they were seamlessly going between sites like Engadget, CNet, Gizmodo, Sony, Samsung, Circuit City, Best Buy, and they were really taking advantage of the tabbed browsing capability to cross-shop and gather information from consumer reviews and trusted editorial sources. We simply wouldn’t have gotten that insight in a lab environment, because we wouldn’t have been intercepting people in their natural browsing environment: instead, they’d sit down, have the browser open, and they’d go. So that behavior would have been completely missed.

    The outcome was that, knowing that customers are looking not only for customer reviews but trusted, third-party editorial content, we’re actively pursuing ways to bring that content into the SonyStyle site, so that from within the interface they can access that info, instead of relying on the multi-tab approach. We’re still very much in the middle of the design phase, but we’re looking at a lot of options, from having an XML feed that comes in that feeds content from these trusted sources, to talking to companies that aggregate editorial reviews and can feed in that content to SonyStyle. In the past, if a product was awarded an editor’s choice, we would have put that on the page as a badge of honor, but I doubt that we would have ever taken it as far as to actually include the entire editorial content alongside the product, if it hadn’t been for this study.

    Advice for those considering going remote

    If we’re talking about remote testing for websites, from my perspective it’s really a non-choice. Having the benefit of intercepting users that are already coming to your site in order to perform a task already puts you so far ahead of the game because the motivation is there, you’ve got them captive, and you just gain so many more insights, compared to creating an artificial environment with artificial motives. The quality and granularity of the results you get is so much richer; given the choice, I’ll never go back to lab testing again. And there’s the cost savings. Clearly, overall, it’s a less costly proposition. You avoid all the travel costs. There’s always a dud user in every batch of research participants, and the great thing with remote testing is, if you start talking to someone you want to cut loose, it’s no harm; you can move on to the next person as the recruiting form is literally filling up before your eyes.

    Based on some of my previous design responsibility, I’ve also been involved with a lot of product design projects, or projects where you may benefit from observing body language and reactions to tactility of surfaces, or form factors or things like that. I’m not convinced that you couldn’t do that remotely, but I just think that there are benefits of actually having somebody right there, so that you can observe their physicality.

    [End of sidebar]

    Remote testing CANs and CAN’Ts

    If you have the gumption, you can test almost anything remotely. There are ways to get around nearly all of these obstacles, including the ones mentioned above, in one way or another (see: chapter 10), but it’s all about what’s most practical: if it’s significantly cheaper / faster / less of a hassle for you to just bring people into the lab, then by all means, bring ’em in. Sometimes it can be a tough call; users in the developing world may have limited access to the internet, for instance, so you’d have to decide whether it’d be easier to fly over and talk to users in person, or find people from that demographic in your area, or arrange for the users to be at a workable internet kiosk to test them remotely. For clarity’s sake, let’s talk about some clear-cut cases of things you can and can’t test remotely.

    Remote testing is a no-brainer for websites, software, anything that runs on a desktop computer–this is the kind of stuff remote research was practically invented to test. The only hitch is that the participant needs to be able to use their own computer to access whatever’s being tested. Other websites besides your own are a cinch: just tell your users during the session to point their web browsers to any address you want. If you’re testing prototype software, there needs to be a secure way to digitally deliver it to them; if it’s a prototype website, give them temporary password-protected access. If it’s just too confidential to give them direct access on their computer, you can host the prototype on your own computer and use remote access software like VNC or GoToMeeting to let them have control over the computer. There’s almost always a way to do it.

    The stuff you test doesn’t even have to be strictly functional. Wireframes, design comps, and static images are all doable; we’ve even tested drawings on napkins (not kidding). Just scan them into a common image format and slap them onto a website. Make sure the user’s browser doesn’t automatically resize them by using a plain HTML wrapper around each image. There are also plenty of software solutions, like Axure and Fireworks, which can take your sketches or photos or oil paintings and help you convert your images to HTML in minutes.
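    Wrapping each image in a bare HTML page is easy to script. Here’s a minimal sketch in Python (the file names are made-up examples) that writes one unstyled wrapper page per image, with no width or height attributes that would let the browser rescale it:

```python
# Sketch: generate a bare-bones HTML wrapper page for each scanned image,
# so the browser shows it at its natural size instead of auto-fitting it.
# File names here are hypothetical examples.

from pathlib import Path

PAGE_TEMPLATE = """<!DOCTYPE html>
<html>
<head><title>{title}</title></head>
<body style="margin:0">
  <!-- No width/height scaling: the image renders at its natural pixel size -->
  <img src="{src}" alt="{title}">
</body>
</html>
"""

def make_wrapper_page(image_filename: str) -> str:
    """Return a minimal HTML page that displays one image unscaled."""
    return PAGE_TEMPLATE.format(title=image_filename, src=image_filename)

def write_wrapper_pages(image_filenames, out_dir="."):
    """Write one wrapper .html file per image; returns the paths written."""
    paths = []
    for name in image_filenames:
        page_path = Path(out_dir) / (Path(name).stem + ".html")
        page_path.write_text(make_wrapper_page(name), encoding="utf-8")
        paths.append(page_path)
    return paths
```

    Upload the images and the generated pages to any web server, and point your participants at the wrapper pages during the session.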

    Programs that require users to enter personal information? You can do that, but if it’s at all possible, give your participants a way to enter “dummy” information wherever they’d be otherwise required to enter sensitive or personally identifying information. If you must use sensitive information, be sure to obtain explicit consent right at the beginning of the testing session; you don’t want to spend 20 minutes on the phone with a user only to terminate the study because the user won’t enter a real address.
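    If you want to hand participants dummy details up front, generating them takes only a few lines. Here’s a hedged sketch; the field names and values are our own illustration, not from any particular study:

```python
# Sketch: generate obviously fake "dummy" personal details for participants
# to enter during a session, so no real personal data touches the test system.
import random

def make_dummy_identity(seed=None):
    """Return a dict of clearly fictitious personal details for one session."""
    rng = random.Random(seed)
    first_names = ["Test", "Sample", "Demo", "Trial"]
    last_names = ["User", "Participant", "Person"]
    n = rng.randint(100, 999)
    return {
        "name": f"{rng.choice(first_names)} {rng.choice(last_names)}",
        # example.com is reserved for documentation, so mail never reaches anyone
        "email": f"user{n}@example.com",
        "address": f"{n} Placeholder Street",
        "phone": "555-0100",  # 555-01xx numbers are reserved for fictional use
    }
```

    Paste the generated values into the session instructions or read them to the participant over the phone before the task begins.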

    It’s still a little difficult to capture usage behaviors outside of a physical desktop environment, so mobile devices and physical products still have a ways to go. There is still hope, but you’ll have to come up with a more creative solution, which may or may not be worth it; see chapter 10 for creative approaches to and applications of remote testing.

    Sidebar: Case Study: Lab vs. Remote

    By Julia Houck-Whitaker, Adaptive Path (and Bolt|Peters alum)

    In January 2002, Bolt|Peters conducted two parallel usability studies on the corporate website of a Fortune 1000 software company. Both studies used identical test plans, but one was executed in a traditional usability laboratory, and the other was conducted remotely using an online screen-sharing tool to observe user behavior.


    Our comparison of methods showed key differences in the areas of time, recruiting, and resource requirements, as well as the ability to test geographically distributed user audiences. The table below provides a snapshot of the key differences we found comparing the two usability testing methods. There appeared to be no significant differences in the quality and quantity of usability findings between remote and in-lab approaches.

    Lab Study
    # of Users: 8
    Recruiting Method: Recruiting agency
    Recruiting Duration: 12 days
    Testing Duration: 2 days
    Location: Pleasanton, CA
    Avg. Session Duration: 85.6 min
    Total Key Findings: 98
    Approximate Cost: $26,000
    Deliverables: Report, highlight video

    Remote Study
    # of Users: 8
    Recruiting Method: Online live recruiting
    Recruiting Duration: 1 day
    Testing Duration: 1 day
    Location: CA, OR, NY, UT
    Avg. Session Duration: 51.5 min
    Total Key Findings: 114
    Approximate Cost: $17,000
    Deliverables: Report, highlight video, survey responses

    [Above: Overview comparison of lab and remote methods]

    Detailed Comparison of Methods
    The table below breaks down the process for each of the recruiting, testing, and analysis phases; for each phase, the lab study details are described first, followed by the remote study details.

    Lab Recruiting
    [bl] 3rd party recruiting agency schedules users
    [bl] Agency selects users based on recruiting criteria
    [bl] Only local users are selected to avoid travel expenses
    [bl] Duration: 12 days

    Recruiting for the lab-based study was outsourced to a professional recruiting agency. Ten users were recruited, screened and scheduled by G Focus Groups in San Francisco, including two extra recruits in case of no-shows. Recruiting 8 users through the recruiting agency took 12 days. Agency-assisted recruiting successfully provided seven test subjects for the lab study; the eighth recruit did not fulfill the testing criteria.

    Remote Recruiting
    [bl] Pop-up screener on the website
    [bl] Practitioner selects users based on responses to screener questions
    [bl] Ability to cost-efficiently recruit globally distributed users
    [bl] Duration: 1 day

    Recruiting for the remote study was conducted using an online pop-up on the software company’s corporate website. The recruiting pop-up, hosted by the researchers, used the same questions as the G Focus Groups recruiting screener. Users in both studies were selected based on detailed criteria such as job title and annual company revenues. Respondents to the online screener who met the study’s qualifications were contacted in real time by the research moderators. The online recruiting method took one day and yielded eight users total, from California, Utah, New York, and Oregon. Normally the live screener requires 4 days of lead time to set up, but in this case it had been completed for a previous project, so setup was not necessary.
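    The triage step behind live recruiting is simple to sketch. Assuming hypothetical criteria along the lines of the job-title and company-revenue questions mentioned above (the thresholds and field names here are our own invention), a screener back end might qualify respondents like this:

```python
# Sketch of the live-recruiting triage step: check a screener response against
# the study's recruiting criteria and flag qualified respondents for an
# immediate call. All criteria and field names are hypothetical examples.

QUALIFYING_TITLES = {"it manager", "it director", "cio"}
MIN_COMPANY_REVENUE = 10_000_000  # USD; assumed threshold for illustration

def qualifies(response: dict) -> bool:
    """Return True if a screener response meets the recruiting criteria."""
    title_ok = response.get("job_title", "").strip().lower() in QUALIFYING_TITLES
    revenue_ok = response.get("annual_revenue", 0) >= MIN_COMPANY_REVENUE
    consented = response.get("agrees_to_call", False)
    return title_ok and revenue_ok and consented

def triage(responses):
    """Split incoming screener responses into call-now and screen-out lists."""
    qualified = [r for r in responses if qualifies(r)]
    screened_out = [r for r in responses if not qualifies(r)]
    return qualified, screened_out
```

    In a real live-recruiting setup, the qualified list would be surfaced to the moderators as responses arrive, so they can call promising respondents within minutes.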

    Lab Environment
    [bl] User in controlled lab environment
    [bl] Test limited to users on location
    [bl] Practitioner sees user screen on her computer
    [bl] Practitioner sees user through a one-way mirror
    [bl] User and practitioner interact via microphone and speakers
    [bl] User audio, screen video and facial expressions are captured

    The lab study was conducted in the software company’s in-house usability lab. The recruits traveled to the lab in Pleasanton, CA and used a Windows PC to participate in the study. In addition to capturing users’ audio and screen movement, users’ facial expressions were also recorded. The video track of user facial expressions did not yield additional usability findings.

    Remote Environment
    [bl] User in native environment
    [bl] Ability to test globally distributed users from one location
    [bl] Practitioner sees user screen on her computer
    [bl] User and practitioner interact via telephone
    [bl] User audio and screen video are captured
    [bl] Lab can be set up at client site, or client can observe remotely

    The remote usability study was conducted using a portable usability lab at the software company’s headquarters in Pleasanton, California. The live recruits participated from their native environments and logged on to an online meeting, allowing the moderators to view the participants’ screen movements. The users’ audio and screen movements were captured to be made into a highlights video.

    Lab Findings
    [bl] High quality of usability findings
    [bl] 98 usability issues uncovered
    [bl] Highlights video with picture-in-picture

    The lab study uncovered issues of similar quality and usefulness to the client when compared with the remote study results. The lab method uncovered 98 key findings, slightly fewer than the remote results (but not to a statistically significant degree).

    Remote Findings
    [bl] High quality of usability findings
    [bl] 116 usability issues uncovered
    [bl] Highlights video with picture-in-picture
    [bl] Highlights video with audio

    The remote study uncovered usability issues of high value to the client. The number of key usability findings was slightly higher compared to the in-lab study. The difference in the number of key findings is statistically negligible.

    Moderated vs. Automated
    Okay, you’ve pondered your situation, and you’ve decided it’s worth a shot to go with a remote research study. Feels good, doesn’t it? The first thing you should know is that remote research can be roughly broken out into two very different categories, which we label here as moderated and automated (or unmoderated) research.

    In moderated research, the researcher acts as a moderator (a.k.a. facilitator) who speaks directly to the research participants; one-on-one interviews, ethnographies, and group discussions (yes, including the infamous focus group) are all examples of moderated research formats. Moderated studies are conducted in real-time, with everyone involved (researchers, participants, and observers) in attendance at the same time. The main benefit of moderated research is that you can gather very in-depth qualitative feedback: not just opinions, but physical behavior, tone-of-voice, facial expression, and so on. A moderated discussion also allows the moderator to probe on new subjects as they come up over the course of a conversation, which makes the research more flexible in scope, and enables the researcher to explore ideas and usages that were unforeseen during the planning phases of the study – these are sometimes called “emerging topics”, and researchers should pay close attention to these, since they often identify things that the development team has been overlooking.

    Automated (or “unmoderated”) research is the flipside of moderated research: the researcher has no direct contact with the participant, but instead uses some kind of tool or service to gather feedback or record user behaviors automatically. Typically, unmoderated research is used to gather quantitative feedback from a large (i.e. hundreds or more) sample. There’s all sorts of feedback you can get this way: users’ subjective opinions and responses to your site, user clicking behavior, task completion rates, how users mentally categorize elements on your site, and even your users’ opinions on competitors’ websites. In contrast to moderated research, automated research is usually done asynchronously: first the researcher designs and initiates the study; then the participants perform the tasks; then, once all the participants have completed the tasks, the researcher gathers and analyzes the data.

    In general, automated research is useful for quantitative research, when you want answers to very specific questions, or wish to examine how users behave on very simple tasks over a large sample. Moderated research is good for qualitative research, observing how people practically use a multi-functional tool (Photoshop, Outlook) or perform a complex task or process with no rigid sequence of tasks (like browsing and researching interfaces on Amazon) over a small pool of users. There can be some overlap between the two approaches, but that’s basically how it breaks down.
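    As a rough illustration of the quantitative side, here’s how task completion rate and time-on-task might be summarized from an automated study’s logs. The record format is our own assumption, not any particular tool’s output:

```python
# Sketch: the kind of aggregate metrics an automated (unmoderated) study
# produces, computed from a list of per-participant task records.
# Each record is assumed to look like {"completed": bool, "seconds": float}.

def summarize_task(records):
    """Return completion rate and mean time-on-task for completed attempts."""
    if not records:
        return {"completion_rate": 0.0, "mean_seconds": None}
    completed = [r for r in records if r["completed"]]
    rate = len(completed) / len(records)
    mean_secs = (
        sum(r["seconds"] for r in completed) / len(completed) if completed else None
    )
    return {"completion_rate": rate, "mean_seconds": mean_secs}
```

    With hundreds of participants, simple aggregates like these are where automated studies shine; the trade-off is that you never hear why the failed participants got stuck.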

    Moderated, automated–do I really have to choose?

    Hey, that’s a good point — the answer is no, you don’t have to choose between moderated and automated testing, or even between lab and remote methods. Remote testing can cover a lot of needs, and after reading this chapter, you should have a pretty good idea of whether or not it’ll work for your project. So if that’s the case, give it a try–you can always go back to lab testing if it suits you better, and if you’ve got lots of time and inclination, you can even try many different methods to test the same interface, using the findings from one study to support or add nuance to another. This approach is recommended for really large-scale projects (testing a new version of a piece of complex software, or an overhauled IA, etc.) where you just want to gather every bit of information you can, but for the average study, this is probably overkill.

    In search of remote research experts!

    Posted on

    Are you an experienced UserVue, Keynote, WebEx, UserZoom, OptimalSort, or MindCanvas (RIP) user? If you consider yourself a remote research expert and have conducted multiple projects using remote methods (moderated or automated, doesn’t matter), we want to hear from you: if you’d be interested in technical editing, advising, or contributing, email us!

    But even if you’re no expert — even if you have no idea what remote research is — we want to hear from you: what do you want to learn about remote research? What’s kept you from trying it, in the past? What kinds of problems do you have when you conduct your research studies? Leave us a comment!

    Remote Research: It’s On

    Posted on

    Welcome, folks! We’re glad to announce our new book, tentatively titled “Remote Research”, which, you might be shocked to hear, is about remote user research. Usually, when we try to explain what remote research is, most people (and even professional user researchers) will sort of tilt their heads in a puzzled beagle-like way: they’ve heard of user research, and they know what a focus group is, but what exactly is remote user research? How can you research users if you’re not there to watch them? Is it like a telephone survey?

    Mostly, we’re writing this book to avoid ever having that conversation again. We’re going to explain what it is, how it might benefit you, and how to do it, in a quick and easy-to-digest 150-page guide. Although remote research as a whole is still relatively new and unknown, it’s a well-developed field with established methods and standard practices. We’ll explain the strengths and disadvantages of different types of remote research; we’ll guide you through all the different tools, services, and resources you can use to get a study going; and we’ll walk you step-by-step through designing, planning, and conducting a study, so you can do it yourself with minimal hassle.

    So, who’s behind this book? That is a delightful question. Nate Bolt is the president, CEO, judge, jury, and executioner of Bolt|Peters, a user research firm in San Francisco which has been specializing in remote research for nearly a decade. Tony Tulathimutte is Bolt|Peters’s writer, blogger, and retro-minimalist French microhouse DJ, and he (or, I should say, I) will be co-authoring the book and taking charge of blog posts like these.

    We’ve got a great idea of where we want to go with our book, but since we’re still at an early stage of drafting, we would absolutely love to hear feedback, requests, and issues you’d like to see raised in the book. More than anything, we aim to please. Keep your RSS reader pointed at this blog for updates!