Doing user testing

  1. Determine the goals and explore the questions
  2. Choose the paradigm and techniques
  3. Identify the practical issues: Design typical tasks
  4. Identify the practical issues: Select typical users
  5. Identify the practical issues: Prepare the testing conditions
  6. Identify the practical issues: Plan how to run the tests
  7. Deal with ethical issues
  8. Evaluate, analyze, and present the data

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley


Observing users

  1. Observation in usability testing tends to be objective, from the
    outside. The observer watches and analyzes what happens.
  2. In contrast, in participant observation the evaluator works with
    users to understand their activities, beliefs and feelings within
    the context in which the technology is used.
  3. Ethnography uses a set of techniques that include participant
    observation and interviews. Ethnographers immerse themselves in
    the culture that they study.
  4. The way that observational data is collected and analyzed depends
    on the paradigm in which it is used: quick and dirty, user
    testing, or field studies.
  5. Combinations of video, audio and paper records, data logging, and
    diaries can be used to collect observation data.
  6. In participant observation, collections of comments, incidents,
    and artifacts are made during the observation period. Evaluators
    are advised to discuss and summarize their findings as soon after
    the observation session as possible.
  7. Analyzing video and data logs can be difficult because of the
    sheer volume of data. It is important to have clearly specified
    questions to guide the process and also access to appropriate
    tools.
  8. Evaluators often flag events in real time and return to examine
    them in more detail. Identifying key events is an effective
    approach. Fine-grained analyses can be very time-consuming.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p.386

DECIDE: A framework to guide evaluation

    D E C I D E: A framework to guide evaluation

  1. Determine the goals
  2. Explore the questions
  3. Choose the evaluation paradigm and techniques
  4. Identify the practical issues
  5. Decide how to deal with the ethical issues
  6. Evaluate, interpret, and present the data

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley

User testing – 5 reasons!

    Five reasons to invest in user testing (according to B. Tognazzini):

  1. Problems are fixed before the product is shipped, not after.
  2. The team can concentrate on real problems, not imaginary ones.
  3. Engineers code instead of debating.
  4. Time to market is sharply reduced.
  5. Finally, upon first release, your sales department has a rock-solid design without having to pepper their pitches with promises about how it will all actually work in release 1.1 or 2.0.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley

Evaluation

  1. Evaluation and design are very closely integrated in
    user-centered design.
  2. Some of the same techniques are used in evaluation as in the
    activity of establishing requirements and identifying users’ needs,
    but they are used differently (e.g., interviews and
    questionnaires, etc.).
  3. Triangulation involves using combinations of techniques in
    concert to get different perspectives or to examine data in
    different ways.
  4. Dealing with constraints, such as gaining access to users or
    accommodating users’ routines, is an important skill for
    evaluators to develop.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p. 337

User-centered approaches to interaction design

  1. Involving users in the design process helps with expectation
    management and feelings of ownership, but how and when to involve
    users is a matter of dispute.
  2. Putting a user-centered approach into practice requires much
    information about the users to be gathered and interpreted.
  3. Ethnography is a good method for studying users in their natural
    surroundings.
  4. Representing the information gleaned from an ethnographic study
    so that it can be used in design has been problematic.
  5. The goals of ethnography are to study the details, while the
    goals of system design are to produce abstractions; hence they
    are not immediately compatible.
  6. Coherence is a method that provides focus questions to help guide
    the ethnographer towards issues that have proved to be important
    in systems development.
  7. Contextual Design is a method that provides models and techniques
    for gathering contextual data and representing it in a form
    suitable for practical design.
  8. PICTIVE and CARD (collaborative analysis of requirements and
    design) are both participatory design techniques that empower
    users to take an active part in design decisions.

From: Preece, J., Rogers, Y., Sharp, H. (2002), Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, p. 312