Four fundamental grips of hands

  • Power grip
  • Precision grip
  • Hook grip
  • Scissor grip

From: Bret Victor: A brief rant on The Future of Interaction Design

Gestures in iOS

  • Tap – To select a control or item (analogous to single mouse click)
  • Drag – To scroll or pan (controlled; any direction; slow speed)
  • Flick – To scroll or pan quickly (less controlled; directional; faster speed)
  • Swipe – Used in a table-view row to reveal the Delete button
  • Double Tap – To zoom in and center a block of content or an image; To zoom out (if already zoomed in)
  • Pinch Open – To zoom in
  • Pinch Close – To zoom out
  • Touch and Hold – In editable text, to display a magnified view for cursor positioning; also used to cut/copy/paste, and select text.

Ginsburg 2011, p.22
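
Most of the gestures in Ginsburg's list map directly onto UIKit's gesture-recognizer classes. The sketch below is illustrative rather than production code (the view controller and handler names are made up), but every recognizer class and property it uses is part of the public UIKit API.

```swift
import UIKit

// Minimal sketch: wiring up the gestures listed above with UIKit's
// gesture recognizers. Class and handler names are illustrative.
class GestureDemoViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Tap: select a control or item (analogous to a single mouse click).
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        view.addGestureRecognizer(tap)

        // Double tap: zoom in and center content, or zoom back out.
        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap))
        doubleTap.numberOfTapsRequired = 2
        view.addGestureRecognizer(doubleTap)
        tap.require(toFail: doubleTap)  // a single tap fires only if no double tap follows

        // Drag and flick both arrive as a pan; a flick is simply a pan with high end velocity.
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan))
        view.addGestureRecognizer(pan)

        // Swipe: e.g. revealing a Delete button in a table-view row.
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe))
        swipe.direction = .left
        view.addGestureRecognizer(swipe)

        // Pinch open/close: zoom in/out.
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch))
        view.addGestureRecognizer(pinch)

        // Touch and hold: magnified cursor positioning, edit menus, text selection.
        let hold = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress))
        view.addGestureRecognizer(hold)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) { /* select the touched item */ }
    @objc func handleDoubleTap(_ gesture: UITapGestureRecognizer) { /* toggle zoom */ }
    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        // velocity(in:) distinguishes a controlled drag from a fast flick.
        let velocity = gesture.velocity(in: view)
        print("pan velocity: \(velocity)")
    }
    @objc func handleSwipe(_ gesture: UISwipeGestureRecognizer) { /* reveal the Delete button */ }
    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) { /* scale content by gesture.scale */ }
    @objc func handleLongPress(_ gesture: UILongPressGestureRecognizer) { /* show the edit menu */ }
}
```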

Touch/gestural interactions: lack of discoverability

Nielsen says that some of the iPad’s problems are endemic to the touch tablet format. “With the iPad, it’s very easy to touch in the wrong place, so people can click the wrong thing, but they can’t tell what happened,” he says. There are also problems with gestures such as swiping the screen because they’re “inherently vague”, and “lack discoverability”: there’s no way to tell what a gesture will do at any particular point.

“People don’t know what they can do, and when they try to do something, they don’t even know what they did, because it’s invisible,” Nielsen explains. “With a mouse, you can click the wrong thing, but you can see where you clicked.”

Jack Schofield: Jakob Nielsen critiques the iPad’s usability failings

Design guidelines for developing applications for multitouch workstations

Based on our experiment we recommend the following set of design guidelines for developing applications for multitouch workstations. Since our studies focus on multitarget selection, all of these guidelines are aimed at applications where target selection is the primary task.

  • A one finger direct-touch device delivers a large performance gain over a mouse-based device. For multitarget selection tasks even devices that detect only one point of touch contact can be effective.
  • Support for detecting two fingers will further improve performance, but support for detecting more than two fingers is unnecessary to improve multitarget selection performance.
  • Reserve same-hand multifinger usage for controlling multiple degrees of freedom or disambiguating gestures rather than for independent target selections.
  • Uniformly scaling up interfaces originally designed for desktop workstations for use with large display direct-touch devices is a viable strategy as long as targets are at least the size of a fingertip.

From: Kenrick Kin, Maneesh Agrawala, Tony DeRose: ‘Determining the Benefits of Direct-Touch, Bimanual, and Multifinger Input on a Multitouch Workstation’
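
The last guideline above (fingertip-sized targets) has a concrete counterpart in Apple's Human Interface Guidelines, which recommend touch targets of at least 44×44 points. The sketch below is one simple way to apply that when uniformly scaling a desktop layout; the function and constant names are made up, and the clamping strategy is an assumption rather than anything proposed in the paper.

```swift
import UIKit

// Assumed fingertip-sized minimum, per Apple's Human Interface Guidelines.
let minimumTouchTarget = CGSize(width: 44, height: 44)

// Uniformly scale a desktop-sized control frame for a large direct-touch
// display, but never let it shrink below a fingertip-sized target.
func touchFrame(scaling desktopFrame: CGRect, by scale: CGFloat) -> CGRect {
    var frame = desktopFrame.applying(CGAffineTransform(scaleX: scale, y: scale))
    frame.size.width = max(frame.size.width, minimumTouchTarget.width)
    frame.size.height = max(frame.size.height, minimumTouchTarget.height)
    return frame
}
```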

Don Norman: ‘Natural interfaces’ are not natural


Most gestures are neither natural nor easy to learn or remember. Few are innate or readily pre-disposed to rapid and easy learning. Even the simple headshake is puzzling when cultures intermix. Westerners who travel to India experience difficulty in interpreting the Indian head shake, which at first appears to be a diagonal blend of the Western vertical shake for “yes” and the horizontal shake for “no.” Similarly, hand-waving gestures of hello, goodbye, and “come here” are performed differently in different cultures. To see a partial list of the range of gestures used across the world, look up “gestures” and “list of gestures” in Wikipedia.

Gestures will become standardized, either by a formal standards body or simply by convention–for example, the rapid zigzag stroke to indicate crossing out or the upward lift of the hands to indicate more (sound, action, amplitude, etc.). Shaking a device is starting to mean “provide another alternative.” A horizontal wiping motion of the fingers means to go to a new page. Pinching or expanding the placement of two fingers contracts or expands a displayed image. Indeed, many of these were present in some of the earliest developments of gestural systems. Note that gestures already incorporate lessons learned from GUI development. Thus, dragging two fingers downward causes the screen image to move upwards, in keeping with the customary GUI metaphor that one is moving the viewing window, not the items themselves.
New conventions will be developed. Thus, although it was easy to realize that a flick of the fingers should cause an image to move, the addition of “momentum,” making the motion continue after the flicking action has ceased, was not so obvious. (Some recent cell phones have neglected this aspect of the design, much to the distress of users and delight of reviewers, who were quick to point out the deficiency.) Momentum must be coupled with viscous friction, I might add, so that the motion not only moves with a speed governed by the flick and continues afterward, but that it also gradually and smoothly comes to a halt. Getting these parameters tuned just right is today an art; it has to be transformed into a science.
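
Norman's coupling of momentum with viscous friction can be made concrete with a small simulation: carry the velocity imparted by the flick forward each frame and decay it exponentially until the content glides to a stop. The sketch below is an assumption about one reasonable implementation, not the algorithm any particular platform uses; the friction constant is exactly the parameter that, as Norman says, still has to be tuned by art.

```swift
import Foundation

// Minimal sketch of flick scrolling with momentum and viscous friction.
// The exponential-decay model and the friction value are assumptions.
struct MomentumScroller {
    var offset: Double = 0        // current scroll position, in points
    var velocity: Double = 0      // points per second, set when the finger lifts
    let friction: Double = 4.0    // higher values stop the motion sooner

    // Call when the flick ends, with the release velocity of the gesture.
    mutating func endFlick(withVelocity releaseVelocity: Double) {
        velocity = releaseVelocity
    }

    // Advance one frame (dt in seconds, e.g. 1.0/60).
    mutating func step(dt: Double) {
        offset += velocity * dt
        velocity *= exp(-friction * dt)          // viscous friction: smooth exponential decay
        if abs(velocity) < 1 { velocity = 0 }    // close enough to rest; stop cleanly
    }
}
```

With this model the total glide distance is roughly the release velocity divided by the friction constant, which makes visible why tuning that one number changes the feel of the whole interaction.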

It is also unlikely that complex systems could be controlled solely by body gestures because the subtleties of action are too complex to be handled by actions–it is as if our spoken language consisted solely of verbs. We need ways of specifying scope, range, temporal order, and conditional dependencies. As a result, most complex systems for gesture also provide switches, hand-held devices, gloves, spoken command languages, or even good old-fashioned keyboards to add more specificity and precision to the commands.

Gestural systems are no different from any other form of interaction. They need to follow the basic rules of interaction design, which means well-defined modes of expression, a clear conceptual model of the way they interact with the system, their consequences, and means of navigating unintended consequences. As a result, means of providing feedback, explicit hints as to possible actions, and guides for how they are to be conducted are required. Because gestures are unconstrained, they are apt to be performed in an ambiguous or uninterpretable manner, in which case constructive feedback is required to allow the person to learn the appropriate manner of performance and to understand what was wrong with their action. As with all systems, some undo mechanism will be required in situations where unintended actions or interpretations of gestures create undesirable states. And because gesturing is a natural, automatic behavior, the system has to be tuned to avoid false responses to movements that were not intended to be system inputs. Solving this problem might accidentally cause more misses: movements that were intended to be interpreted, but were not. Neither of these situations is common with keyboard, touchpad, pens, or mouse actions.

From: Don Norman: ‘Natural interfaces’ are not natural
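
Norman's last point, that tuning a system against accidental input trades false responses for misses, shows up directly in the thresholds that gesture recognizers expose. The values below are illustrative assumptions, but the properties are real UIKit ones: raising them suppresses accidental activations at the cost of missing some intended gestures.

```swift
import UIKit

// Sketch of the false-response vs. miss trade-off Norman describes.
// The threshold values are assumptions chosen for illustration.
func makeConservativeLongPress(target: Any, action: Selector) -> UILongPressGestureRecognizer {
    let press = UILongPressGestureRecognizer(target: target, action: action)
    press.minimumPressDuration = 0.8  // longer hold: fewer accidental triggers, more missed intended ones
    press.allowableMovement = 5       // tighter movement tolerance: a resting or sliding finger rarely fires it
    return press
}
```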