Eye Tracking the User Experience Blog

A Practical Guide to Research

Posts written by Aga Bojko

    Just in Time for the Holidays

    So much has happened since my last blog post. I’ve moved from Chicago to San Francisco. I’ve done research on cheddar cheese, buttons, and fraud reporting. I’ve touched a human brain. I’ve petted an adult cheetah (and he purred). I’ve binge-watched three and a half seasons of The Walking Dead.

    Somewhere between studying cheese and watching zombies, I also finished the eye tracking book that I started writing in 2011. It officially went on sale yesterday!

    My hope is that the 320-pager, generously illustrated and peppered with examples, will become a useful resource for those conducting eye tracking research aimed at evaluating designs. Pro tip: With its vibrant cover and the pleasant smell of fresh print, the book can double as an attractive stocking stuffer.

    Thanks to everyone for your continued support. I’m excited to hear your feedback.

    Happy Holidays!

    The Truth About Webcam Eye Tracking

    By now everyone has probably heard of webcam eye tracking. If you haven’t, it is exactly what it sounds like – detecting a person’s gaze location using a webcam instead of a “real” eye tracker with all the bells and whistles, including infrared illuminators and high sensitivity cameras.

    Because webcam eye tracking doesn’t require any specialized equipment, participants don’t have to come to a lab. They are tested remotely, sitting in front of their computer at home, wearing Happy Bunny pajamas and trying to keep their fat cat from rolling onto the keyboard. The only requirements are a webcam, an Internet connection, and eyes to track.

    Companies that provide webcam eye tracking services include GazeHawk and EyeTrackShop (YouEye has had a website for as long as I can remember but their webcam eye tracking doesn’t seem to be available yet). These companies recruit participants, administer the study, create visualizations, and report data by area-of-interest.

    After reading about a few of their studies, including EyeTrackShop’s recent Facebook vs. Google+ hit, I decided to experience webcam eye tracking first-hand. I signed up to be a participant.

    One day I was sitting in a comfy leather chair at a Starbucks with my laptop in my lap, working on chapter 10 of my book on eye tracking, when an “Earn up to $4” email came through to let me know I had a GazeHawk study waiting for me. Because any distraction is a welcome distraction when I’m writing (in case you’re wondering what’s taking me so long), I decided to take an educational break and follow the link.

    The first page informed me about how to prepare for the test.

    119-1.jpg

    All screenshots courtesy of GazeHawk (thanks!).

    The lighting at Starbucks seemed to match the “DO” pictures, so I proceeded. I was then asked for access to my webcam, which I promptly granted, only to discover that even a webcam adds 10 pounds.

    121-3.jpg

    I centered my face in the window and was ready for calibration.

    122-4.jpg

    I followed the red dot as instructed.

    123-5.jpg

    While my “results” were being uploaded (which seemed like a lifetime), I managed to check Twitter (follow me!), finish my pumpkin bread, send a few text messages, and help a kid plug his laptop into the outlet behind my chair. I then realized the instructions on the screen said not to move my head during the upload. Oops. Even though there was no mention of moving the webcam (or the whole laptop), I figured I shouldn’t have done that either. Good thing I didn’t get up to get a refill!

    When the screen was finally ready for me to start testing, I was several minutes older and my calibration was probably already invalid, but since the interface didn’t request a do-over, I continued with the study.

    124-6.jpg

    Step 4 of the process provided a scenario (auto insurance shopping) and instructed me to press Escape when I was done. I wasn’t quite sure what I was supposed to finish before pressing Escape, but I clicked on “Start Testing” anyway. An insurance company homepage appeared.

    A second or two later, my Outlook displayed an email notification, and, compulsive email checker that I am, I opened the email. I also had to open it on my BlackBerry or the red light would have kept blinking for a while, and I can’t stand that.

    I came back to the insurance homepage and looked around for a while but nothing was clickable – the page appeared to be static. I then remembered the instructions I saw previously, and the Escape key saved the day.

    Based on this experience, a few other studies I participated in, and my conversations with people involved with GazeHawk and EyeTrackShop, I made a list of what I believe are the main limitations of this new technology:

    1. Webcam eye tracking has much lower accuracy than real eye trackers. While a typical remote eye tracker (e.g., Tobii T60) has an accuracy of 0.5 degrees of visual angle, a webcam will produce an accuracy of 2–5 degrees, provided that the participant is NOT MOVING. To give you an idea of what that means, five degrees correspond to about 2.4 inches (6 cm) on a computer monitor (assuming a viewing distance of 27 inches), so the actual gaze location could be anywhere within a 2.4-inch radius of the gaze location recorded with a webcam (see the quick sketch after this list). I don’t know about you, but I wouldn’t be comfortable with that level of inaccuracy in my studies.
    2. Head movement decreases the accuracy of webcam eye tracking even further, and the longer the session, the more likely it is to occur. Webcam eye tracking sessions therefore have to be very short – typically less than 5 minutes, but ideally less than a minute. Studies conducted with real eye trackers, on the other hand, can last much longer with little impact on accuracy.
    3. Currently, webcam eye tracking can handle only single static pages. All four studies I participated in, and a few I read about, were one-page studies. Without allowing participants to click on anything and go to another page, the applicability of webcam eye tracking is limited. This constraint also lowers the external validity of the studies.
    4. The rate at which gaze location is sampled is much lower for webcams than for real eye trackers. The typical frame rate of a remote (i.e., non-wearable) eye tracker is between 60 and 500 Hz (i.e., images per second), while a webcam’s frame rate is somewhere between 5 and 30 Hz. The low frame rate makes analyzing fixations and saccades impossible; the analysis is limited to rough gaze points.
    5. Due to imperfect lighting conditions, poor webcams, on-screen distractions, participants’ head movement, and overall lower tracking robustness, only 3–7 out of every 10 people who participate in a study will provide sufficiently useful data. While this may not be a problem in and of itself, given the very low cost of oversampling, what makes me uncomfortable is not knowing how the determination to exclude data from the analysis is made. Data cleansing is important in any study, but it is absolutely critical in webcam eye tracking. Exclusion criteria should be made explicit for webcam eye tracking to gain trust among researchers.
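
    To put the accuracy numbers from point 1 in perspective, here is a minimal Python sketch of the standard visual angle geometry (the function name and example values are mine, not from any eye tracker vendor’s documentation):

        import math

        def visual_angle_to_size(degrees, viewing_distance):
            """Return the on-screen size subtended by a visual angle,
            in the same unit as the viewing distance."""
            return 2 * viewing_distance * math.tan(math.radians(degrees) / 2)

        # Typical remote eye tracker (0.5 deg) vs. webcam worst case (5 deg),
        # both at a 27-inch viewing distance:
        print(round(visual_angle_to_size(0.5, 27), 2))  # ~0.24 inches
        print(round(visual_angle_to_size(5.0, 27), 2))  # ~2.36 inches (~6 cm)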

    Despite its limitations, the contribution of webcam eye tracking to research is undeniable. Webcams made it possible to conduct remote eye tracking studies and enjoy the benefits of remote testing, such as low cost, fast data collection, and global reach.

    While webcam eye tracking is not a substitute for in-person research that uses real eye trackers, it is a cheap option if you’re looking for a quick and dirty indication of the distribution of attention on a single page (e.g., your homepage or an ad). As the technology and data collection processes employed by these services continue to improve, the applicability of webcam eye tracking will expand. Will it ever replace eye tracking as we know it? Doubtful, but I will keep an eye on it anyway.

    Eye Tracking Without Eyes

    “Participant-free eye tracking” has been around for a while but is still attracting quite a bit of attention. Websites such as EyeQuant, Feng-GUI, and Attention Wizard allow you to upload an image (e.g., a screenshot of a web page) and obtain a visualization (e.g., a heatmap) showing a computer-generated prediction of where people would look in the first five seconds of being exposed to the image. No eye tracker, participants, or lab required!

    These companies claim a 75–90% correlation with real eye tracking data. Unfortunately, I couldn’t find any research supporting their claims. If you know of any, I’m all ears.

    To satisfy my curiosity about the accuracy of their predictions, I submitted an eBay homepage to EyeQuant, Feng-GUI, and Attention Wizard and obtained the following heatmaps:

    58-Heatmap_FengGUI.jpg

    Attention heatmap simulation by Feng-GUI

    59-Heatmap_AttentionWizard.jpg

    Attention heatmap simulation by Attention Wizard

    60-Heatmap_EyeQuant.jpg

    Attention heatmap simulation by EyeQuant

    I then compared these heatmaps to the initial five-second gaze activity from a study with 21 participants tracked with a Tobii T60 eye tracker:

    61-Heatmap_Tobii.jpg

    Attention heatmap based on real participants (red = 10+ fixations)

    First, let me just say that I’m not a fan of comparing heatmaps just by looking at them because visual inspection is subjective and prone to error. Also, different settings can produce very different visualizations, and you can’t ensure equivalent settings between a real heatmap and a simulated one.

    With that in mind, let’s take a look at the four heatmaps. The three simulations look rather similar, don’t they? But the “real heatmap” seems to differ from the simulated ones quite a bit. For example, the simulations predict a lot of attention on images (including advertising), whereas the study participants barely even looked at many of those elements. Our participants primarily focused on the navigation and search, which is not reflected in the simulated heatmaps.

    The simulations also show a fair amount of attention on areas below the page fold but the study participants never even scrolled! In addition to a heatmap, Feng-GUI produced a gaze plot indicating the sequence with which users would scan the areas of the page. The first element to be looked at was predicted to be a small image at the bottom of the page, well below the fold:

    62-Gazeplot_FengGUI.jpg

    Gaze plot simulation by Feng-GUI (the numbers in the circles indicate the order of fixations)

    I wish we could compare the simulated gaze activity to the real gaze activity quantitatively but that doesn’t appear to be possible. Even though Feng-GUI and EyeQuant provide some data (percentage of attention on an area of interest) in addition to data visualizations, it’s unclear what measure these percentages are based on:

    63-percentage.jpg

    Percentage of attention on the navigation predicted by EyeQuant
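
    For what it’s worth, if the real and simulated heatmaps could be exported as same-size images, a rough quantitative comparison would be easy to run yourself. Here is a minimal Python sketch; the file names are made up, and I am only guessing that something like a plain Pearson correlation over pixel intensities is what the vendors’ 75–90% claims refer to:

        import numpy as np
        from PIL import Image

        def heatmap_correlation(a, b):
            """Pearson correlation between two same-sized intensity arrays."""
            a = np.asarray(a, dtype=float).ravel()
            b = np.asarray(b, dtype=float).ravel()
            return np.corrcoef(a, b)[0, 1]

        # Hypothetical usage: export both heatmaps at the same resolution,
        # load them as grayscale, and correlate pixel by pixel.
        real = np.array(Image.open("tobii_heatmap.png").convert("L"))
        simulated = np.array(Image.open("fenggui_heatmap.png").convert("L"))
        print(heatmap_correlation(real, simulated))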

    But even just from eyeballing the results, I know I wouldn’t be comfortable making decisions based on the computer-generated predictions.

    The simulations have limited applicability and can by no means replace real eye tracking. They make predictions mostly based on the bottom-up (stimulus-driven) mechanisms that affect our attention, failing to take into account top-down (knowledge-driven) processes, which play a huge role even during the first few seconds.

    Computer-generated visualizations of human attention may work better for pages with no scrolling and under the assumption that users will be completely unfamiliar with the website and have no task/goal in mind when visiting it. How common is this scenario? Not nearly as common as the sellers of participant-free eye tracking would like us to believe.

    The Most Precise (or Most Accurate?) Eye Tracker

    To keep up with the developments in research and technology, I have a Google Alert set up for “eye tracking” OR “eyetracking” OR “eye-tracking.” The daily email comes to my Inbox at 11:30am, just in time for my browsing lunch (more fun than a working lunch, less fun than a non-working lunch). Today, nine out of the twenty results in the alert email mentioned Tobii Technology introducing the “most precise eye tracking solution” for mobile device testing:

    47-google_alert1.jpg

    Most precise! Who could resist that?

    The solution (Tobii Mobile Device Stand) described in the articles is actually quite clever. I’m not sure why it made the news today because it’s been available for a while now. Maybe it was just this morning when they found it was “most precise.” I continued reading in suspense.

    To my disappointment, no explanation was offered for how this conclusion was reached. What’s more, I don’t even know what was meant by “precise.” I think the author was referring to the accuracy of the eye tracking solution but I can’t be sure. And that’s precisely where the problem lies – in the confusion between precision and accuracy (and people not realizing that there is confusion). Let me explain…

    The accuracy of an eye tracker is the average difference between what the eye tracker recorded as the gaze position and what the gaze position actually was. We want this offset to be as small as possible but it is obviously unrealistic to expect it to be equal to zero.

    Accuracy is measured in degrees of visual angle. Typical accuracy values fall in a range between 0.5 and 1 degree. To give you an idea of what that means, one degree corresponds to half an inch (1.2 cm) on a computer monitor viewed at a distance of 27 inches (68.6 cm). In other words, the actual gaze location could be anywhere within a radius of 0.5 inch (the blue circle below) from the gaze location recorded with an eye tracker with one degree of accuracy (the “X”):

    46-Accuracy.jpg

    Accuracy values reported in eye tracker manuals are measured under ideal conditions, which typically include, for example, testing participants with no corrective eyewear and taking the measurement immediately after calibration. During “real research,” the difference between the reported and actual gaze locations can be larger for participants wearing glasses or contact lenses or those who moved at some point following the calibration procedure.

    Precision (aka “spatial resolution”), on the other hand, is a measure of how well the eye tracker is able to reliably reproduce a measurement. Ideally, if the eye is in the same exact location in two successive measurements, the eye tracker should report the two locations as identical. That would be perfect precision.

    In reality, precision values of currently available eye trackers range from 0.01 to 1 degree. These values are calculated as the root mean square of the distance (in degrees of visual angle) between successive samples. Because the precision values reported by manufacturers are measured using a motionless artificial eye (pretty cool, huh?), precision will be lower when tracking real eyes.
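
    If the RMS definition sounds abstract, the calculation itself is tiny. Here is a minimal Python sketch with invented sample values:

        import numpy as np

        def rms_precision(x, y):
            """Root mean square of the angular distance between successive
            gaze samples; x and y are in degrees of visual angle."""
            dx = np.diff(x)
            dy = np.diff(y)
            return np.sqrt(np.mean(dx**2 + dy**2))

        # Five invented samples from a steady fixation:
        x = [10.02, 10.00, 9.98, 10.01, 10.00]
        y = [5.01, 4.99, 5.00, 5.02, 5.00]
        print(round(rms_precision(x, y), 3))  # ~0.028 degrees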

    The table below summarizes the relationship between eye tracking accuracy and precision. The cross indicates the actual gaze location, while the dots are gaze locations reported by the eye tracker.

    48-AccuracyPrecisionTable.jpg

    All in all, the “most precise eye tracking solution” was probably just a poor choice of words but it gave me an excuse to talk about precision vs. accuracy and sound like I’m up to date on current events. I do what I can.

    You Are a *Real* Eye Tracking Researcher If…

    1. A part of you dies every time you see a heatmap in place of proper data analysis.
    2. You have in fact asked an eye tracker manufacturer to remove the heatmap feature from their software. (They didn’t.)
    3. You used to think eye tracking was cool but that was like ages ago.
    4. Your old eye trackers were lovingly named Dusk and Dawn, and you still remember the sleepless nights when technology just wasn’t what it is these days.
    5. It makes you genuinely happy when someone knows which fixation identification algorithm they are using.
    6. You cringe when people call counterbalancing “randomizing.”
    7. It affects your relationship when you discover that your significant other can’t be calibrated.
    8. You already came to terms with the fact that eye tracking is just not going to save the world.
    9. Your roller derby name is EYE KILL YOU.
    10. You got really excited about this list.

    Can you think of anything else?

    Don’t Boo the Eyeballs

    Writing about writing isn’t hard but the problem is that when you’re writing, you have no time to write about that. That’s my eloquent excuse for why I haven’t posted anything here in a while.

    I just finished Chapter 3. According to the outline, there are still nine more to go. Who wrote this outline and why are there so many chapters in this book?!

    After finishing each chapter, I celebrate by making a word cloud out of the text. I then post it on Facebook for my friends to see. I figured that would make them feel more involved. They could also see that I’m making progress, which means that they will soon be allowed to talk to me again.

    41-Chp1.jpg
    Chapter 1

    43-Chp2.jpg
    Chapter 2

    44-Chp3.jpg
    Chapter 3

    [Don’t they all make it look like each chapter is about the same exact thing?]

    So far the response has been lukewarm at best. Some people politely clicked “Like” but no one commented. I was really starting to suspect they didn’t care about eye tracking… But that was all about to change with Chapter 3. The Chapter 3 word cloud got a comment!

    When my BlackBerry notified me of it, I was almost as excited as when I won an Elmo piñata in a raffle once. I immediately pulled over (don’t FB and drive, kids) and looked at the comment. It said “Boooooooooooooo, boo! Eyeballs! Eyeballs!” Seriously.

    Hmm. Maybe after the next chapter I will just get a massage or have my car detailed to celebrate.

    If the word clouds didn’t give you enough of a preview of the first three chapters, here are some random facts for you:

      • When buying an eye tracker, UX practitioners (and market researchers) don’t seem to be very particular about its technical specs. They care more about the system’s ease of use, efficiency of analysis tools, and visualizations.
      • Eye tracking beats card sorting (4:1) in the number of monthly Google searches in the world. Usability beats eye tracking (5:1). But Katy Perry and Justin Bieber beat them all. By a lot.
      • Manufacturers are focusing on improving their wearable eye trackers. SMI will soon be releasing glasses with binocular tracking and a hi-res scene camera. ASL is working on the next-generation Mobile Eye. We live in exciting times.
      • Double Stuf Oreos have been around as long as I have been alive (if this seems irrelevant, stay tuned…)
      • Studies have shown that gaze-cued Retrospective Think-Aloud (RTA) protocol provides more feedback than Concurrent Think-Aloud protocol. But when gaze-cued RTA was compared to video-cued RTA (with no gaze overlay), the amount of feedback was comparable. Ha.
      • Cover your eyes after you have been talking to someone face to face for a while and ask them what color your eyes are. Chances are they won’t know. This is an example of how you can look at something but not necessarily register everything about it (i.e., “looking without seeing”).
      • Many practitioners decide to use eye tracking in a study before identifying the research questions. Good or bad? [HINT: You cannot possibly know which methods to use without knowing what the study is trying to accomplish.]

    Back to the real writing now…

    Oh, if anyone has access to NZT, please get in touch.

    What Eye Tracking Can’t Do

    As I’m finalizing Chapter 2 (whew!), I’m noticing that we are sometimes so focused on explaining to others what eye tracking can do that we have a hard time verbalizing what it cannot do. I’m specifically referring to the insight generated by eye tracking rather than software or hardware limitations.

    So, here is the fill-in-the-blank challenge of the month for you:

    The issues that eye tracking data can help explain typically originate in the suboptimal overall interface layout, specific element placement, graphic treatment, affordances, labeling, and messaging. However, eye tracking may not be as useful for explaining issues caused by _____________________.

    Can’t wait to hear your ideas.

    To Track or Not To Track

    I decided to tackle Chapter 2 first — “To Track or Not To Track” — the most controversial question when it comes to using eye tracking in our field.

    UX practitioners who have an opinion about eye tracking appear to be divided into two opposing camps: those who are pro eye tracking and those who are anti. The proponents seem to want to use eye tracking for pretty much everything, regardless of the study objectives. I have even heard of one usability professional who put their cat in front of an eye tracker just because it was cool. Oh wait, that was actually me. And no, Oreo didn’t calibrate. But I digress…

    The opponents, on the other hand, claim that eye tracking is just “smoke and mirrors” and doesn’t have much value. These people manage to ruin it for everyone every time they voice their opinions.

    I am sort of in the middle (if you ignore the cat incident). I believe there are about ten good reasons to use eye tracking in our field and definitely more than ten not to.

    And now, onto the audience participation segment of this post…

    Can you think of one really good reason to use eye tracking in a UX study? (Please don’t say, “To find out where people look.”) By a “good reason” I mean where insight gained from eye tracking data would help answer actual research questions and add value to the study. Conversely, can you think of a time when someone used eye tracking and you rolled your eyes, thinking, “What a waste of time?”

    It’s Chilly in Chicago

    It’s supposed to be -21°F with the wind chill tonight, so I better get this first blog piece out before my brain completely switches to preservation mode. Every Chicago winter is cold and yet people talk (mostly complain) about it every single day. The concept of habituation doesn’t seem to work here.

    But let’s talk about the book… I’ve been thinking about writing it for a long time and it’s finally really happening. Well, it says so on the Internet, so it must be true.

    My goal is to write the book that I wish I had when I was starting out with eye tracking. Because of the lack of resources on how to apply eye tracking to UX research, I was left to my own devices. I had made a couple of mistakes (well, maybe more than a couple) before I figured out when and how to use this new tool in the proverbial UX tool chest for good (rather than for evil, of course). Over the past ten years, I’ve written articles and book chapters on eye tracking, but this book is a unique opportunity to put all of these insights, concepts, tips, and examples together in a nice bathroom- or plane-friendly format.

    I recently learned that I’m not a very accepting person (thanks, Mom). I should have guessed that from the fact that a lot of things have always bothered me, from people chewing gum on the train to bad research. I don’t think this book can make much of a difference in people’s gum-chewing behavior, but it has a good shot at influencing future eye tracking research and making it better.

    If you are reading this, you must be interested in eye tracking and the book, or you are a secret admirer who has Googled my name. Either way, thank you and welcome to the working site for Eye Tracking the User Experience: A Practical Guide.

    I must go and bundle up now.