The Mobile Frontier Blog

A Guide for Designing Mobile Experiences

Posts written by Rachel Hinman

  • Alex Rainert: Head of Product at foursquare


    alex.jpg

    Alex is Head of
    Product at foursquare. Alex brings 12 years of product development experience
    and a multidisciplinary background to his work, with a focus on mobile, social
    and emerging technologies. Previously, he co-founded Dodgeball, one of the
    first mobile social services in the U.S., which he sold to Google in May 2005.
    He is a lifelong New Yorker currently living in Brooklyn with his wife,
    daughter, and dog. Alex holds a master’s degree from New York University’s
    Interactive Telecommunications Program and a bachelor’s degree in philosophy
    from Trinity College.



    How did you find
    your way into the mobile user experience space?

    I started getting interested in mobile when I attended
    New York University’s Interactive Telecommunications graduate program. I went
    to ITP in 2003 and 2004 when, believe it or not, Friendster was still in vogue.
    At that time, mobile technology was still super frustrating, but just starting
    to turn the corner to be a little bit more consumer friendly. ITP is an
    environment where students are encouraged to play around with the newest
    technology as part of the curriculum.

    I’ve always been interested in the idea of mobility and
    presence and how you can alter and enhance the way people interact with the
    world around them through technology in a non-intrusive way. At ITP, I started
    working with Dennis Crowley on an application called Scout. When students arrived at school, they had to swipe their ID
    cards to enter the building. We designed Scout around that core interaction.
    When students entered the building and swiped their card, Scout would drop them
    into a virtual space and then other students could query that space with
    questions like, “Is there anyone on the floor right now who knows ActionScript?”
    Scout used the idea of presence and social connection to enhance the way
    students were interacting with each other based on space. In a lot of ways,
    foursquare has been a natural extension of that idea. We’ve tried to take
    something simple like a check-in and build a rich experience around that.

    One thing that has been challenging – both with the
    early version of Scout and now foursquare – is that when you’re designing
    mobile experiences, it often feels like you’re trying to build things that help
    pull people over that hump to appreciate the richer experience that can come
    from designing around the intersection of mobile, social, and place.



    How do you pull
    people over that hump so that they can realize the value of the types of mobile
    experiences you’re designing?

    Part of pulling people over the hump is staying focused.
    The foursquare team is a group of people who have an incredibly active
    relationship with our phones. It’s easy to forget that not everybody has that type
    of a relationship with their mobile devices, and we have to always make sure
    we’re designing for those outside of our power user set.

    foursquare has always been a social utility at its
    core – find out what your friends are doing, tell your friends what you’re
    doing. We use levers like game mechanics (encouragement through points, the
    leaderboard, badges), recommendations, and specials to encourage engagement
    with the app. The challenge is tweaking all those different levers without
    losing sight of what is central to the app’s experience – social and place.

    Now that people can carry around these powerful devices,
    and have access to rich content like maps, images, and video, it’s easy to
    think, “Oh, you can watch videos on it” or “We can create an augmented reality
    lens to enhance people’s view of the world.” We don’t want people to open up
    foursquare and be buried in there or force people to look ridiculous waving
    their phone in the air to see things. That’s definitely not the kind of
    experience we’re trying to create. We want to build something that people can
    pop open anywhere in the world, that provides a quick, valuable interaction, and
    then it’s done. They can close it and get back to enjoying what it is they were
    doing.

    From day one, we’ve been building the foursquare
    experience for people to share things in the real world – to share rich
    experiences – and everything we’ve done has gone into building towards that
    vision. We feel that’s our beachhead – to keep plugging away and being able to
    focus on that area is our competitive advantage.


    There seems to be
    a theme in your professional history. Dodgeball, Scout, and foursquare all
    combine mobile, a sense of place with a social layer. Where does that interest
    come from?

    I think part of it is my personality. I’m personally
    drawn to things that bring people together. I love that a big part of my job is
    building the team that builds the product. I’ve been managing a softball team
    for 12 years, and I run a football office pool. I know the latter two are sort
    of trivial examples, but it’s coordinating groups of people around a thing, and
    that thing can be a fantasy baseball league, or that thing can be going out for
    happy hour. That’s something that’s been true about me my whole life.

     

    foursquare.jpg

    Do you think the
    fact that you have spent so much time in New York City has influenced your
    thoughts about mobile design?

    Definitely. New York is a unique place to design things
    around real-time place-based social interactions. Designing mobile experiences
    in New York is very much a gift, but it’s also a challenge not to get too
    swayed by that. Currently, foursquare has over 20 million users. We have to
    design for the next 40 million users and not the first 20 if we want to build
    the type of experience that I think we can, and a lot of those 40 aren’t
    necessarily going to be urban dwellers.

    You’ve been involved in the mobile industry for quite some time now. What do you think have been some of the biggest changes you’ve experienced?

    One big change is how easy it is to create experiences that use the social graph. With Dodgeball, there was no social graph to speak of. If you wanted to create a social experience, you basically had to build it from scratch. There weren’t really graphs you could leverage like you can now with things like Twitter and Facebook. Now that it’s easier to bootstrap a friend graph, we can focus all our efforts on the experience we want to design on top of that. The fact that there’s a standard social graph designers can use to build social experiences means a major barrier to entry has been removed.

    Also, the sheer number of people with high-end mobile devices is another big change. When I think back to the days of Dodgeball, we decided not to build the experience for devices like Windows Mobile phones or smartphones, because the reality was that not that many people were carrying those phones. Despite the fact that it was a bigger challenge to build a rich mobile experience on lower-end phones, we focused on SMS because it was something everyone could use and because we felt strongly that if you’re building something social, it’s not fun if it’s something that most people can’t use. Now, higher-end mobile devices are much more common and are becoming people’s preferred device. Even if people are given the choice of having an experience on their laptop or having an experience on their phone, people are starting to choose the experience on their phone because it’s always with them. It’s just as fast. It’s just as nice looking. That just really opens the door for designers and engineers to build great mobile experiences.


    What mobile design topics interest you the most?





    I’m really interested in
    designing experiences that leverage mobile devices as location-aware sensors.
    There’s something really powerful about the idea that the phones people carry
    with them can act as sensors alerting people about interesting things in their
    environments. Devices can know about the people you’ve been at places with, the
    things you’ve done and shared… even the speed at which you’re moving. That
    opens up the opportunity to build experiences that are even less disruptive
    than the experiences we have now. Now, it’s still very much like, “Let me
    open up Google Maps and get directions to go do such and such.”

     

    Granted, this all has to be done with
    the user’s privacy always kept front of mind, and I think the technology is
    finally getting to a point where we can find that balance and design an
    incredibly engaging augmented experience while respecting a user’s privacy.
    Ultimately, I think we’ll settle into some place where people will feel
    comfortable sharing more information than they are now, and I’m interested in
    seeing the kinds of mobile experiences we can create based on that information.

    It seems weird to think that in our
    lifetime, we had computers in our homes that were not connected to a network,
    but I can vividly remember that. But that’s something my daughter will never
    experience. I think a similar change will happen with some of the information
    sharing questions that we have today.

    There’s a weird line, though. Those kinds of experiences
    can get creepy super fast. I think the important thing to remember is that some
    problems are human problems. They’re problems a computer can’t solve. I’m
    definitely not one of those people who says stuff like, “We think phones
    will know what you want to do before you want to do it.” I think there’s a
    real danger in over-relying on the algorithm to solve human problems. I think it’s
    finding the right balance of how you can leverage the technology to help
    improve someone’s experience, but not expect that you’re going to
    wholeheartedly hand everything over to a computer to solve. It’s a really
    difficult dance to try and be the technology in between human beings. However,
    no matter how far the technology goes, there’s always going to be that nuance
    that needs to be solved by people.



    Foreword to The Mobile Frontier




    So here’s a little fact that feels surprising: Today on our small blue planet,
    more people have access to cell phones than to working plumbing. Think about
    that. Primitive plumbing has been around for over a thousand years. Modern
    working plumbing has been around for at least 200 years longer than the fleeting
    few years since 1984 when Motorola first ripped the phone off the wall and
    allowed us to carry it around. Most people find plumbing useful. Apparently
    many millions more find cellular phones indispensable.

    Whenever
    a big part of modern life–the Internet, video games, search engines,
    smartphones, iPads, social networking systems, digital wallet payment systems–is
    so useful that we can no longer imagine life without it, we act as if it
    will forever be the way it is now. This childlike instinct has its charms,
    but it is always wrong and particularly dangerous for designers. People who
    think deeply about the built world necessarily must view it as fungible, not
    fixed. It is the job of thoughtful designers to notice the petty annoyances
    that accumulate when we use even devices we love; to stand in the future and think
    of ways to make it more elegantly functional, less intrusive, more natural, far
    more compelling. In the best such cases, designers need to surprise us–by
    radically altering what we think is possible. To create the futures we cannot
    even yet imagine.

    But the
    future is a scary place replete with endless options, endless unknowns. Of
    course, like everyone else, designers don’t have a crystal ball. There is a
    constant risk that we will make assumptions which turn out to be either too
    bold or too timid. Designers must rely instead on methods to think through
    which evolutionary and revolutionary shifts are most likely–among an infinite
    array of possibilities.

    In The Mobile Frontier, Rachel Hinman has
    tackled one of the most vital issues in the future of design: how will our lives change while we are on
    the go?
    She has used her vast prior experience in working to shape the
    future for Nokia, then added disciplined methods to do us four vital favors:

    Reveal
    the structures of current and coming mobile interfaces…

    Just
    as cars have gone through several design eras (remember tailfins?), The Mobile Frontier clarifies four
    successive waves of strategies, each making devices easier and more
    pleasant to use. Whether you are a
    designer, or simply an enthusiast, this is a revelation. It shows how the
    metaphors and strategies for how to use a device evolve as there is more
    processing power, memory, and display capabilities available to make a device
    better behaved.

    Uncover
    patterns in how we behave when we are mobile…

    When you observe people deeply enough you discover
    something fundamental. While there are an infinite number of things people
    theoretically might do with mobile devices, inevitably the real activities we
    choose to do can be distilled into clear patterns with a few themes and
    variations. The Mobile Frontier has made
    these clear, so that the challenge of thinking about mobility becomes vastly
    more interesting, more tractable and far easier to either improve or reinvent.

    Provide
    strategies for designing better mobile experiences…

    Whenever we want to improve
    or reinvent a category there are some methods that are better than others. The Mobile Frontier helps lay out active
    design and prototyping strategies that make the otherwise daunting task of
    building new interface alternatives likely to succeed instead of fail. This allows designers to proceed with
    courage and confidence, knowing they can reliably imagine, develop and test
    alternative interfaces, in order to get the future to show up ahead of its
    regularly scheduled arrival.

    Speculate
    about what will come next…

    Finally, The
    Mobile Frontier
    bravely peers down a foggy, winding road to guess what lies
    around the corner. This is a task always doomed to failure in detail, but
    Rachel does a brilliant job of giving us the broad outlines. This is essential for helping us get past
    the trap of merely filigreeing around the edges of the known, to instead
    imagine the breakthroughs still to come.

    Collectively, these four deep insights advance the
    known boundaries of understanding today’s mobile devices and experiences. Thus
    they help usher in the vastly new ones sure to emerge soon. Here’s why that
    matters: we are only three decades into one of the most important revolutions
    the world has ever seen. In design development terms, that is a mere blink. Just
    as the mobile device world has zipped past plumbing like a rocket sled would
    pass a slug, we simply must see ourselves at the very beginning of this
    revolution. With mobile devices, we are today where autocars were when the
    Model T was the hottest thing on wheels. We will see vastly more change than
    most of us can possibly imagine. Through our mobile devices we will find new
    advances in learning, security, community, interaction, understanding,
    commerce, communication and exploration.

    Rachel Hinman is helping us make all that come along
    a little sooner, a lot easier, and far more reliably. See for yourself. Better
    yet, join in. Get a move on. Oh, and
    bring your devices. Let’s make ’em more amazing. 


    Larry Keeley

    President and Co-Founder

    Doblin Inc. 



    Mapping Touchscreens for Touch


    hands.jpg

    Unlike personal computer experiences, which involve many physical
    buttons like keyboard keys and mice with scroll wheels, most mobile touch
    screen experiences involve interactions with nothing more than flat screens of
    glass. While there are few physical buttons, the nature of touch screen
    interactions is highly physical because they are explored through human hands.
    Consequently, it’s important that touch screen layouts not only offer generous
    touch targets, but also accommodate the ergonomics of fingers and thumbs.

    Smartphones and the “Thumb Zone”

    One of the great things about smartphones is that they’re designed to fit in
    the palm of your hand – often resulting in one-handed use. This means touch
    screen interfaces must not only be aesthetically pleasing, they should be
    organized for the fingers, especially the thumb. It’s the finger that gets the
    workout, and it’s the reason why most major interface elements are located at the
    bottom of the screen instead of the top.

    Interfaces designed for the desktop experience typically follow
    the design convention of placing major menu items across the top of the screen.
    The reverse is true of mobile experiences. Major menu items of your mobile
    experience should reside in “the thumb zone” – the area of the screen that is navigable using just a thumb.

    thumb_zone.gif
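
    To make this concrete, here is a minimal SwiftUI sketch (my own illustration,
    not from the original post) that keeps the primary navigation in the thumb zone
    by using a bottom tab bar; the tab names are hypothetical.

    import SwiftUI

    // A minimal sketch: TabView renders its tab bar at the bottom of the screen,
    // keeping the major menu items reachable by the thumb during one-handed use.
    struct ThumbZoneNavigation: View {
        var body: some View {
            TabView {
                Text("Nearby places")
                    .tabItem { Label("Explore", systemImage: "map") }
                Text("Friend activity")
                    .tabItem { Label("Friends", systemImage: "person.2") }
                Text("Your profile")
                    .tabItem { Label("Me", systemImage: "person.circle") }
            }
        }
    }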

    What about Tablets?

    While they share many characteristics with smartphones (few physical
    buttons, the user mostly interacting with a piece of glass), the ergonomic
    considerations for tablets are quite different, mostly because
    one-handed use of a tablet is very difficult. Instead, people use tablets in a variety of
    ergonomic configurations. From curling up with one like a book, to holding it
    like a clipboard, to propping it up in a kitchen while cooking – the variety of
    ways people use tablets makes it difficult to recommend a single set of heuristics
    about navigation and content placement.

    Instead, it’s important to consider how the user’s body and the device mutually
    reconfigure each other during tablet use. This involves considering the ways a
    user will likely configure their body when using a tablet application and placing
    the primary navigation elements accordingly. Here are a few examples, followed by
    a short code sketch that shows one way this thinking might be encoded:

    “Curling Up” Stance

    For tablet experiences that encourage the “curling up” user stance, opt for navigation at the top and consider incorporating horizontal gesture controls.

    Curling_up_1.0.jpg


    “The Clipboard” Stance

    For tablet experiences in which the user will be holding/using the tablet while standing, consider placing the navigation at the top of the screen where it’s easy for the user to see.

    clipboard.jpg

    “The Multi-tasker” Stance

    In tablet experiences where the user will likely be multi-tasking with other objects or devices, their time and attention will be divided. Opt for a “content as the interface” strategy. Try embedding navigation and interaction controls within the content itself and place these controls and navigation in the center portion of the screen.

    multi_tasker.jpg
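
    Pulling the three stances above together, here is a rough, hypothetical SwiftUI
    sketch of stance-aware navigation placement; the stance names are my own and
    simply mirror the examples above, not an established API.

    import SwiftUI

    // Hypothetical stances, matching the examples above.
    enum TabletStance {
        case curlingUp    // reading posture: top navigation, horizontal gestures
        case clipboard    // standing posture: top navigation, easy to glance at
        case multiTasker  // divided attention: controls embedded in the content
    }

    // A layout wrapper that moves the primary navigation based on the stance
    // the screen is designed for.
    struct StanceAwareLayout<Content: View>: View {
        let stance: TabletStance
        let content: Content

        init(stance: TabletStance, @ViewBuilder content: () -> Content) {
            self.stance = stance
            self.content = content()
        }

        var body: some View {
            switch stance {
            case .curlingUp, .clipboard:
                VStack(spacing: 0) {
                    // Top-anchored navigation for reading and standing postures.
                    HStack {
                        Button("Library") {}
                        Spacer()
                        Button("Search") {}
                    }
                    .padding()
                    content
                }
            case .multiTasker:
                // "Content as the interface": embed the control within the
                // content, toward the center of the screen.
                content.overlay(Button("Next step") {}.padding())
            }
        }
    }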

    What does “Convergence” mean to you?



    Convergence
    is a word that’s floated around the vernacular of the mobile industry for as
    long as I can remember. To be honest, I’m guilty of dismissing it. More often
    than not, when people use the term, I relegate it to the pile of meaningless
    buzzwords nobody can quite define along with the likes of “synergy”
    and “platform.”

     

    However,
    the frequency with which I hear this word in recent times has become somewhat
    alarming, leaving me to wonder… when people say “convergence,” what do they
    actually mean? What does this word mean to me?

     

    Some Thoughts on Convergence

    When I
    think of convergence, shapeshifting comes to mind. Just like the Wonder Twins
    transforming into “the form of” a convenient animal/water configuration that
    will save the day, convergence is what enables experiences to shapeshift
    between different devices and environments. Instead of being siloed and trapped,
    experiences can move fluidly through multiple devices.

     

    My
    thinking of late is that convergence actually occurs on three levels that are
    separate but interrelated:


    convergence.jpg

     

    Technology convergence is when
    a set of devices share a common technology, which enables experiences to
    move across multiple devices. Examples: Wireless Internet or a software
    platform like Android.

     

    Media convergence is when
    content/information is prismed through multiple devices or touchpoints. The
    content and interactions often respond appropriately to the context
    (smartphone vs. big-screen TV, etc.) – but the focus is on the throughline of
    the content through the ecosystem of devices. Examples: Pandora, Netflix.

     

    Activity convergence enables
    users to perform an activity regardless of the device. The key to this type of
    convergence is figuring out how to allow users to complete a task or achieve their
    goal in a way that is intuitive given the high degree of variance between types
    of devices and the vast number of use contexts. Examples: email, browsing the
    Internet, looking up a restaurant on Yelp.

     

    When I
    asked some friends at work what convergence meant to them, they referred me to
    the video below.



     

    What does convergence mean to you? 

    Please let me know in the comments below!

     

     

    Animation Principle Eight: Secondary Action


    squirrel.jpg



    Imagine a
    squirrel running across your lawn. The movement of the squirrel’s spry legs
    (considered the primary action) would be animated to express the light, nimble
    nature of his gait. The agile, undulating movement of the squirrel’s tail –
    considered the secondary action – would have a separate and slightly different
    type of movement than his legs. Secondary action is an animation principle that
    governs movement that supports the primary action of an animation sequence
    without distracting from it. It is applied to reinforce the mood or enrich the
    main action of an animated scene. The key to secondary action is that it should
    emphasize, rather than take attention away from, the main action being animated.


    (Caption: The primary action of this animation is the squirrel’s body and
    legs moving. The shape and character of the squirrel’s tail as it moves is the
    secondary action. The secondary action serves to reinforce the mood and
    character of the primary action and is used to make the animation feel more
    realistic.)

    Mobile UX Secondary Action Example

    secondary_action_example.jpg

    (Caption: The transition that occurs when a user clicks on a URL in an
    email, activating the phone’s browser on an iPhone, is an example of secondary
    action. The primary action is the browser window emerging forward into the user’s
    view. The secondary action is the email view receding into the background. Both
    actions occur simultaneously, but the secondary action of the email application
    supports the primary action – opening a browser window.)
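
    As a rough sketch of that kind of transition (my own illustration in SwiftUI,
    not Apple’s implementation), the primary action brings the browser view forward
    while the secondary action shrinks and dims the email view beneath it, all in
    the same animation:

    import SwiftUI

    // Tapping toggles the transition: the browser slides forward (primary action)
    // while the email view recedes and dims (secondary action).
    struct EmailToBrowserTransition: View {
        @State private var showingBrowser = false

        var body: some View {
            ZStack {
                // Email view: the secondary action pushes it into the background.
                Color.blue
                    .opacity(showingBrowser ? 0.4 : 1.0)
                    .scaleEffect(showingBrowser ? 0.92 : 1.0)

                // Browser view: the primary action slides it forward into view.
                if showingBrowser {
                    Color.white
                        .transition(.move(edge: .bottom))
                }
            }
            .onTapGesture {
                // One animation block drives both actions, keeping the secondary
                // movement synchronized with, and subordinate to, the primary.
                withAnimation(.easeInOut(duration: 0.35)) {
                    showingBrowser.toggle()
                }
            }
        }
    }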

    Secondary Action and Mobile UX

    When used prudently, the subtle incorporation of secondary action can make the animation and transitions within your mobile experiences really sing. Subtlety is the key, though. It’s a natural novice tendency to go a little “nutso” when learning to integrate motion into your work. The principle of secondary action can help you edit your use of motion and prevent your experiences from feeling like a trip to a carnival’s fun house for users. 

    Just remember: 

    Support, not upstage. Secondary action should reinforce the primary action, not detract from it. 

    Subtlety is key. If the secondary action/movement is competing with the primary animation, the motion phrase will feel superfluous or confusing for the user. Think squirrel tail 🙂


    squirrel_tail.jpg


    What examples of secondary action in mobile UX have you seen?

    Animation Principle Seven: Arcs


    sparkler.jpg

    Objects don’t move through space at random. Instead, they move along relatively
    predictable paths that are influenced by forces such as thrust, wind resistance,
    and gravity. The outline of a sparkler on the Fourth of July or skid marks on the
    pavement from a braking car are rare examples of the physical traces of these
    paths. Usually an object’s trajectory is invisible. While these paths lie largely
    unseen by the human eye, patterns exist for trajectory paths based on whether an
    object is organic or mechanical. Objects that are mechanical in nature, such as
    cars, bicycles, and trains, tend to move along straight trajectories, whereas
    organic objects such as plants, people, and animals tend to move along arched
    trajectories. The object you wish to animate should reflect these characteristics
    of movement for greater realism.

    (Caption: An object’s trajectory lies largely
    unseen except on rare occasions, such as the glowing sparks of a lit sparkler
    that trace the path of where it’s been.)

    When integrating motion into a mobile experience, it’s
    important to consider whether the object being animated should reflect organic
    or mechanical qualities. If the object possesses organic qualities, the arc
    animation principle suggests the object should move along an arched trajectory.
    An object that is mechanical in nature would move along a straight or angular one.

     

    arcs.jpg

    (Caption: The animation used to express the
    motion of elements such as fish and water in the iPhone application Koi Pond
    moves along arched trajectories, giving the experience an organic feeling. The
    interface elements in an iteration of the Android mobile platform tend to move
    along straight trajectories, giving the UI a mechanical feeling.)
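
    To ground the principle in code, here is a minimal UIKit sketch (my own example,
    written in Swift) that moves a layer along an arched Bezier path rather than a
    straight line – the kind of trajectory an organic element might follow:

    import UIKit

    // Move a layer from start to end along a gentle arch instead of a straight line.
    func animateAlongArc(_ layer: CALayer, from start: CGPoint, to end: CGPoint) {
        let path = UIBezierPath()
        path.move(to: start)
        // A single control point above the endpoints produces the arch.
        let control = CGPoint(x: (start.x + end.x) / 2,
                              y: min(start.y, end.y) - 120)
        path.addQuadCurve(to: end, controlPoint: control)

        let animation = CAKeyframeAnimation(keyPath: "position")
        animation.path = path.cgPath
        animation.duration = 0.6
        animation.timingFunction = CAMediaTimingFunction(name: .easeInEaseOut)
        layer.add(animation, forKey: "arcMove")

        // Commit the final position so the model layer matches the animation.
        layer.position = end
    }

    A mechanical element, by contrast, could simply animate its position in a straight
    line with a basic animation – no curved path needed.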