Doing user testing

  1. Determine the goals and explore the questions
  2. Choose the paradigm and techniques
  3. Identify the practical issues: Design typical tasks
  4. Identify the practical issues: Select typical users
  5. Identify the practical issues: Prepare the testing conditions
  6. Identify the practical issues: Plan how to run the tests
  7. Deal with ethical issues
  8. Evaluate, analyze, and present the data

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley

Observing users

  1. Observation in usability testing tends to be objective, from the
    outside. The observer watches and analyzes what happens.
  2. In contrast, in participant observation the evaluator works with
    users to understand their activities, beliefs and feelings within
    the context in which the technology is used.
  3. Ethnography uses a set of techniques that include participant
    observation and interviews. Ethnographers immerse themselves in
    the culture that they study.
  4. The way that observational data is collected and analyzed depends
    on the paradigm in which it is used: quick and dirty, user
    testing, or field studies.
  5. Combinations of video, audio and paper records, data logging, and
    diaries can be used to collect observation data.
  6. In participant observation, collections of comments, incidents,
    and artifacts are made during the observation period. Evaluators
    are advised to discuss and summarize their findings as soon after
    the observation session as possible.
  7. Analyzing video and data logs can be difficult because of the
    sheer volume of data. It is important to have clearly specified
    questions to guide the process and also access to appropriate
    tools.
  8. Evaluators often flag events in real time and return to examine
    them in more detail. Identifying key events is an effective
    approach. Fine-grained analyses can be very time-consuming.
    (A minimal event-flagging sketch follows below.)

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p.386
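
The data-logging and event-flagging points above (items 5 and 8) can be made concrete with a small sketch. The code below is only an illustration, not something from the book; the class and field names are invented for this note.

```python
# Hypothetical sketch: a timestamped observation log with flagged events,
# so key moments can be revisited against the video/data record later.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ObservationEvent:
    timestamp: datetime          # when the event was noted
    note: str                    # free-text comment or incident description
    flagged: bool = False        # mark key events for fine-grained analysis


@dataclass
class ObservationLog:
    session_id: str
    events: List[ObservationEvent] = field(default_factory=list)

    def record(self, note: str, flagged: bool = False) -> None:
        self.events.append(ObservationEvent(datetime.now(), note, flagged))

    def key_events(self) -> List[ObservationEvent]:
        # Only the flagged events are examined in detail afterwards.
        return [e for e in self.events if e.flagged]


# Example use during a session:
log = ObservationLog(session_id="P03-task2")
log.record("User hesitates over the search icon")
log.record("User abandons the checkout form", flagged=True)
for event in log.key_events():
    print(event.timestamp.isoformat(), event.note)
```

Flagging in real time keeps the analysis focused on a handful of key events instead of the full recording, which is what makes the sheer volume of video and log data manageable.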

DECIDE: A framework to guide evaluation

    D E C I D E: A framework to guide evaluation

  1. Determine the goals
  2. Explore the questions
  3. Choose the evaluation paradigm and techniques
  4. Identify the practical issues
  5. Decide how to deal with the ethical issues
  6. Evaluate, interpret, and present the data

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley
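
As an illustration only (the book presents DECIDE as a checklist, not code), the framework can be carried into a study as a simple written plan. The sketch below captures such a plan as plain data; every field name and example value is a hypothetical choice.

```python
# Hypothetical sketch: a DECIDE-style evaluation plan written down as data,
# one field per step of the framework, filled in before the study starts.
from dataclasses import dataclass
from typing import List


@dataclass
class EvaluationPlan:
    goals: List[str]                # Determine the goals
    questions: List[str]            # Explore the questions
    paradigm: str                   # Choose the evaluation paradigm ...
    techniques: List[str]           # ... and techniques
    practical_issues: List[str]     # Identify the practical issues
    ethical_issues: List[str]       # Decide how to deal with the ethical issues
    analysis_approach: str          # Evaluate, interpret, and present the data


plan = EvaluationPlan(
    goals=["Check whether first-time users can complete a purchase"],
    questions=["Where do users hesitate?", "How long does checkout take?"],
    paradigm="usability testing",
    techniques=["observation", "post-test questionnaire"],
    practical_issues=["recruit five typical users", "book the lab for two days"],
    ethical_issues=["obtain informed consent", "anonymize all recordings"],
    analysis_approach="task completion times plus qualitative incident notes",
)
```

Writing the plan out this way simply forces each DECIDE question to be answered explicitly before any users are recruited.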

User testing – 5 reasons!

    Five reasons to invest in user testing (according to Bruce Tognazzini):

  1. Problems are fixed before the product is shipped, not after.
  2. The team can concentrate on real problems, not imaginary ones.
  3. Engineers code instead of debating.
  4. Time to market is sharply reduced.
  5. Finally, upon first release, your sales department has a rock-solid design without having to pepper their pitches with how it will all actually work in release 1.1 or 2.0.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley

Evaluation

  1. Evaluation and design are very closely integrated in
    user-centered design.
  2. Some of the same techniques are used in evaluation as in the
    activity of establishing requirements and identifying users’ needs,
    but they are used differently (e.g., interviews and
    questionnaires, etc.).
  3. Triangulation involves using combinations of techniques in
    concert to get different perspectives or to examine data in
    different ways.
  4. Dealing with constraints, such as gaining access to users or
    accommodating users’ routines, is an important skill for
    evaluators to develop.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p. 337

User-centred approaches to interaction design

  1. Involving users in the design process helps with expectation
    management and feelings of ownership, but how and when to involve
    users is a matter of dispute.
  2. Putting a user-centered approach into practice requires much
    information about the users to be gathered and interpreted.
  3. Ethnography is a good method for studying users in their natural
    surroundings.
  4. Representing the information gleaned from an ethnographic study
    so that it can be used in design has been problematic.
  5. The goals of ethnography are to study the details, while the
    goals of system design are to produce abstractions; hence they
    are not immediately compatible.
  6. Coherence is a method that provides focus questions to help guide
    the ethnographer towards issues that have proved to be important
    in systems development.
  7. Contextual Design is a method that provides models and techniques
    for gathering contextual data and representing it in a form
    suitable for practical design.
  8. PICTIVE and CARD (collaborative analysis of requirements and
    design) are both participatory design techniques that empower
    users to take an active part in design decisions.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p. 312

Design, prototyping and construction

  1. Prototyping may be low fidelity (such as paper-based) or high fidelity (such as software-based).
  2. High-fidelity prototypes may be vertical or horizontal.
  3. Low-fidelity prototypes are quick and easy to produce and modify and are used in the early stages.
  4. There are two aspects to the design activity: conceptual design and physical design.
  5. Conceptual design develops a model of what the product will do and how it will behave, while physical design specifies the details of the design such as screen layout and menu structure.
  6. We have explored three perspectives to help you develop conceptual models: an interaction paradigm point of view, an interaction mode point of view, and a metaphor point of view.
  7. Scenarios and prototypes can be used effectively in conceptual design to explore ideas.
  8. We have discussed four areas of physical design: menu design, icon design, screen design and information display.
  9. There is a wide variety of support tools available to interaction designers.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p.277-8

Hierarchical Task Analysis (HTA)

“Hierarchical Task Analysis (HTA) was originally designed to identify training needs (Annett and Duncan, 1967). It involves breaking a task down into subtasks and then into sub-subtasks and so on. These are then grouped together as plans that specify how the tasks might be performed in an actual situation. HTA focuses on the physical and observable actions that are performed, and includes looking at actions that are not related to software or an interaction device at all. The starting point is a user goal. This is then examined and the main tasks associated with achieving that goal are identified. Where appropriate, these tasks are subdivided into subtasks.”

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p.231
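
The structure described here, a goal decomposed into subtasks with plans that say how the subtasks are carried out, maps naturally onto a tree. The sketch below is only an illustration of that idea; the Task class and the borrowing-a-book example are invented for this note, not taken from the text.

```python
# Hypothetical sketch: an HTA represented as a tree of tasks, each with
# optional subtasks and a plan describing how the subtasks are performed.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)
    plan: Optional[str] = None   # how/when the subtasks are carried out

    def outline(self, number: str = "0", indent: int = 0) -> None:
        # Print the task hierarchy with HTA-style numbering (0, 0.1, 0.2, ...).
        print("  " * indent + f"{number} {self.name}")
        if self.plan:
            print("  " * (indent + 1) + "plan: " + self.plan)
        for i, sub in enumerate(self.subtasks, start=1):
            sub.outline(f"{number}.{i}", indent + 1)


# Invented example: the user goal sits at the root, subtasks beneath it.
hta = Task(
    "Borrow a book from the library",
    plan="do 1; if the book is not on the shelf, do 2; then do 3",
    subtasks=[
        Task("Find the book on the shelves"),
        Task("Ask a librarian to locate the book"),
        Task("Check the book out at the desk"),
    ],
)
hta.outline()
```

Note that the subtasks describe observable actions (finding, asking, checking out), not interface operations, which is the point the passage above makes about HTA.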

Task Analysis

“Task analysis is used mainly to investigate an existing situation, not to envision new systems or devices. It is used to analyze the underlying rationale and purpose of what people are doing: what are they trying to achieve, why are they trying to achieve it, and how are they going about it? The information gleaned from task analysis establishes a foundation of existing practices on which to build new requirements or to design new tasks.
Task analysis is an umbrella term that covers techniques for investigating cognitive processes and physical actions, at a high level of abstraction and in minute detail.”

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p.231