Gestures in iOS

  • Tap – To select a control or item (analogous to single mouse click)
  • Drag – To scroll or pan (controlled; any direction; slow speed)
  • Flick – To scroll or pan quickly (less controlled; directional; faster speed)
  • Swipe – Used in a table-view row to reveal the Delete button
  • Double Tap – To zoom in and center a block of content or an image; To zoom out (if already zoomed in)
  • Pinch Open – To zoom in
  • Pinch Close – To zoom out
  • Touch and Hold – In editable text, to display a magnified view for cursor positioning; also used to cut/copy/paste and select text.

Ginsburg 2011, p.22
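
The pinch gestures in the list above are typically implemented by tracking the distance between the two touch points: the zoom factor is the ratio of the current distance to the starting distance. A minimal sketch in Python (the function name and coordinate convention are mine, not from the source):

```python
import math

def pinch_scale(start_a, start_b, cur_a, cur_b):
    """Zoom factor implied by a pinch: the ratio of the current distance
    between the two touch points to their starting distance."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(cur_a, cur_b) / dist(start_a, start_b)

# Fingers moving apart (pinch open) give a factor > 1 (zoom in);
# fingers moving together (pinch close) give a factor < 1 (zoom out).
print(pinch_scale((0, 0), (100, 0), (-50, 0), (150, 0)))  # 2.0
print(pinch_scale((0, 0), (100, 0), (25, 0), (75, 0)))    # 0.5
```

Real gesture recognizers add thresholds and continuous updates, but the core mapping from finger distance to scale is this ratio.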

Touch/gestural interactions: lack of discoverability

Nielsen says that some of the iPad’s problems are endemic to the touch tablet format. “With the iPad, it’s very easy to touch in the wrong place, so people can click the wrong thing, but they can’t tell what happened,” he says. There are also problems with gestures such as swiping the screen because they’re “inherently vague”, and “lack discoverability”: there’s no way to tell what a gesture will do at any particular point.

“People don’t know what they can do, and when they try to do something, they don’t even know what they did, because it’s invisible,” Nielsen explains. “With a mouse, you can click the wrong thing, but you can see where you clicked.”

Jack Schofield: Jakob Nielsen critiques the iPad’s usability failings

Design guidelines for developing applications for multitouch workstations

Based on our experiment we recommend the following set of design guidelines for developing applications for multitouch workstations. Since our studies focus on multitarget selection, all of these guidelines are aimed at applications where target selection is the primary task.

  • A one-finger direct-touch device delivers a large performance gain over a mouse-based device. For multitarget selection tasks, even devices that detect only one point of touch contact can be effective.
  • Support for detecting two fingers will further improve performance, but support for detecting more than two fingers is unnecessary to improve multitarget selection performance.
  • Reserve same-hand multifinger usage for controlling multiple degrees of freedom or disambiguating gestures rather than for independent target selections.
  • Uniformly scaling up interfaces originally designed for desktop workstations for use with large display direct-touch devices is a viable strategy as long as targets are at least the size of a fingertip.

From: Kenrick Kin, Maneesh Agrawala, Tony DeRose, ‘Determining the Benefits of Direct-Touch, Bimanual, and Multifinger Input on a Multitouch Workstation’
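
The last guideline can be turned into a quick calculation: given a display’s pixel density, how many pixels must a target span to be fingertip-sized? A sketch assuming a fingertip width of roughly 9 mm (a common rule of thumb; the exact figure varies by study, and the function name is mine):

```python
def min_target_px(ppi, fingertip_mm=9.0):
    """Smallest target edge, in pixels, that spans a fingertip
    (assumed ~9 mm wide) on a display with the given pixels per inch."""
    MM_PER_INCH = 25.4
    return round(fingertip_mm / MM_PER_INCH * ppi)

# The original iPad's display is roughly 132 ppi.
print(min_target_px(132))  # 47
```

So on a ~132 ppi screen, a uniformly scaled-up desktop interface needs its targets to end up at least about 47 pixels across before the guideline is satisfied.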

Don Norman: ‘Natural interfaces’ are not natural

Most gestures are neither natural nor easy to learn or remember. Few are innate or readily pre-disposed to rapid and easy learning. Even the simple headshake is puzzling when cultures intermix. Westerners who travel to India experience difficulty in interpreting the Indian head shake, which at first appears to be a diagonal blend of the Western vertical shake for “yes” and the horizontal shake for “no.” Similarly, hand-waving gestures of hello, goodbye, and “come here” are performed differently in different cultures. To see a partial list of the range of gestures used across the world, look up “gestures” and “list of gestures” in Wikipedia.

Gestures will become standardized, either by a formal standards body or simply by convention – for example, the rapid zigzag stroke to indicate crossing out, or the upward lift of the hands to indicate more (sound, action, amplitude, etc.). Shaking a device is starting to mean “provide another alternative.” A horizontal wiping motion of the fingers means to go to a new page. Pinching or expanding the placement of two fingers contracts or expands a displayed image. Indeed, many of these were present in some of the earliest developments of gestural systems. Note that gestures already incorporate lessons learned from GUI development. Thus, dragging two fingers downward causes the screen image to move upwards, in keeping with the customary GUI metaphor that one is moving the viewing window, not the items themselves.

New conventions will be developed. Thus, although it was easy to realize that a flick of the fingers should cause an image to move, the addition of “momentum,” making the motion continue after the flicking action has ceased, was not so obvious. (Some recent cell phones have neglected this aspect of the design, much to the distress of users and the delight of reviewers, who were quick to point out the deficiency.) Momentum must be coupled with viscous friction, I might add, so that the motion not only moves with a speed governed by the flick and continues afterward, but also gradually and smoothly comes to a halt. Getting these parameters tuned just right is today an art; it has to be transformed into a science.
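
Norman’s momentum-plus-friction behaviour can be sketched in a few lines: each frame, the flick velocity is scaled by a friction factor below 1, so the content keeps gliding after the finger lifts and comes smoothly to rest. A toy one-dimensional sketch (the constants are illustrative, not tuned values from any shipping system):

```python
def flick_positions(v0, friction=0.95, min_speed=0.5):
    """Positions of scrolled content after a flick with initial velocity v0
    (pixels per frame). Velocity decays geometrically each frame (viscous
    friction) until it falls below min_speed, when motion stops."""
    pos, v, path = 0.0, v0, []
    while abs(v) >= min_speed:
        pos += v
        v *= friction
        path.append(pos)
    return path

path = flick_positions(30.0)
# Each step is smaller than the last, so the content decelerates
# smoothly instead of stopping dead when the finger lifts.
```

The “tuning” Norman mentions is the choice of the friction factor and the stopping threshold: too much friction and the flick feels dead, too little and the content glides forever.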

It is also unlikely that complex systems could be controlled solely by body gestures because the subtleties of action are too complex to be handled by actions–it is as if our spoken language consisted solely of verbs. We need ways of specifying scope, range, temporal order, and conditional dependencies. As a result, most complex systems for gesture also provide switches, hand-held devices, gloves, spoken command languages, or even good old-fashioned keyboards to add more specificity and precision to the commands.

Gestural systems are no different from any other form of interaction. They need to follow the basic rules of interaction design, which means well-defined modes of expression, a clear conceptual model of the way they interact with the system, their consequences, and means of navigating unintended consequences. As a result, means of providing feedback, explicit hints as to possible actions, and guides for how they are to be conducted are required. Because gestures are unconstrained, they are apt to be performed in an ambiguous or uninterpretable manner, in which case constructive feedback is required to allow the person to learn the appropriate manner of performance and to understand what was wrong with their action. As with all systems, some undo mechanism will be required in situations where unintended actions or interpretations of gestures create undesirable states. And because gesturing is a natural, automatic behavior, the system has to be tuned to avoid false responses to movements that were not intended to be system inputs. Solving this problem might in turn cause more misses: movements that were intended to be interpreted, but were not. Neither of these situations is common with keyboard, touchpad, pen, or mouse actions.

From: Don Norman, ‘Natural interfaces are not natural’

XXL screens

Loooooong pages are more than just a design trend. Imaginary page folds and screens designed to match users’ monitors appear to have been abandoned in favour of more fluid approaches. The homepage of the largest Swedish newspaper, Dagens Nyheter, is a prime example of the new XXL homepage style. It’s been designed to deliver a get-it-all-on-one-screen experience. Breaking news from the top section of the screen is even replicated further down the page, like a reminder or wrap-up. It makes me wonder how finger-scrolling on the iPad & co. will remediate the Web experience.

Direct touch

Direct touch bypasses abstraction and creates a strong connection with the touched object. This is particularly true when the object itself triggers associations in our minds. Due to its size, weight, and display area, the iPad triggers powerful associations with:

  • Printed documents
  • Notepads of paper
  • File folders from the filing cabinet
  • Clipboards
  • Books

There is something intrinsically “right” about seeing the iPad as a technological successor to, or version of, these physical objects. We’re immediately ready to accept the one as a substitute or enhancement for the other. This is a powerful, and novel, position for the iPad software developer.

Matt Legend Gemmell: iPad application design

iPad UX

Differences between iPad and iPhone

  1. The display is much larger: 1024×768 pixels. Apps with more demanding presentation requirements will be at home here.
  2. The virtual keyboard is larger, and external physical keyboards are supported via Bluetooth or the dock. Apps which focus on typing are now much more feasible.
  3. The iPhone supports multi-touch, but only the iPad can credibly claim to support two hands. We’ll talk more about this later.

Two-pane and three-pane interfaces are once again worthy of consideration on this class of device.

  • Master-Detail is feasible and acceptable on iPad.
  • In landscape, both Master and Detail are visible.
  • In portrait, the Master is shown in a transient pop-over.

Editing/viewing: Look like a viewer, and behave like an editor

  • Hide configuration UI until needed.
  • Edit object properties in place.
  • Attach the editing UI to the object. Show/hide/move as necessary.
  • Inspectors should present context-relevant UI.
  • Hide controls which don’t apply to the selection or focus.
  • Modes (do one thing at a time) are preferable to clutter; removing a feature might be preferable to adding a mode for it.
  • Offer only the most-used/needed features. If in any doubt, remove a feature.
  • Discard optional/niche or highly configurable functionality.

  • Dual-handed input is acceptable.
  • Be usable with one hand. Don’t require two hands for essential features.
  • But don’t be afraid to offer time-saving, discoverable dual-handed functionality.

Extracted from Matt Legend Gemmell: iPad application design