Posts Tagged ‘usabilityTesting’

Data-informed design

September 21, 2016

Our philosophy and approach for every design sprint is to be data-informed, not data-driven. We try to surface every piece of information that will help paint a clear picture of the problem we’re trying to solve. We leverage all of the data we can to understand the core problem, but we don’t blindly build whatever the data may suggest.

Data is an extremely valuable tool and it’s critical to the design process. Designing without data is like flying blind, but purely data-driven design is dangerous and can lead to unintentional and uninspired design. Testing 41 different shades of blue may increase your conversion rate slightly, but if your design is flawed to begin with it will never be able to reach its full potential. Relentless A/B testing can only take you so far. Maybe your Google Analytics numbers aren’t quite giving you the whole picture.

Ryan Langlois: Data-informed design
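
A minimal sketch, with invented visitor and conversion counts, of the kind of significance check that sits behind an A/B test on conversion rate. It can tell you whether variant B beat variant A, but not whether either design was worth building in the first place.

```python
# A minimal sketch of a two-sided z-test for the difference between two
# conversion rates. All counts below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return both conversion rates, the z statistic, and the p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=150, n_b=4000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```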


Measuring performance (e.g. task success rate) and satisfaction (e.g. ‘liking a Website’)

June 9, 2014

• Performance and satisfaction scores are strongly correlated, so if you make a design that’s easier to use, people will tend to like it more.

• Performance and satisfaction are different usability metrics, so you should consider both in the design process and measure both if you conduct quantitative usability studies.

JAKOB NIELSEN: User Satisfaction vs. Performance Metrics
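
A minimal sketch of how that correlation can be checked in your own studies: pair each design’s task success rate with its mean satisfaction rating and compute Pearson’s r. The per-design numbers below are invented for illustration.

```python
# Pearson correlation between performance (task success rate) and
# satisfaction (mean rating on a 1-7 scale). All numbers are invented.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

success_rate = [0.55, 0.62, 0.70, 0.78, 0.85, 0.91]   # one value per design
satisfaction = [3.1, 3.6, 4.2, 4.9, 5.4, 6.0]          # mean rating per design
print(f"r = {pearson_r(success_rate, satisfaction):.2f}")
```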

User experience metrics

June 9, 2014

Most metrics are marketing oriented, not experience oriented. Unique visitors can tell you whether your marketing campaign worked and social mentions can tell you whether you’ve got a great headline, but these metrics do not reveal much about the experience people have had using a site or application.

(…)

User experience is about more than just ease of use, of course. It is about motivations, attitudes, expectations, behavioral patterns, and constraints. It is about the types of interactions people have, how they feel about an experience, and what actions they expect to take. User experience also comprehends more than just the few moments of a single site visit or one-time use of an application; it is about the cross-channel user journey, too. This is new territory for UX metrics. – See more at: http://www.uxmatters.com/mt/archives/2014/06/choosing-the-right-metrics-for-user-experience.php

Three categories of UX metrics: Usability, Engagement, and Conversion

    Usability
    Usability metrics focus on how easily people can accomplish what they’ve set out to do. This category of metrics includes all of the usability metrics that some UX teams are already tracking—such as time on task, task success rate, and an ease-of-use rating. It may also include more granular metrics such as icon recognition or searching versus navigating. Plus, it could include interaction patterns or event streams that show confusion, frustration, or hesitation.

  • Time on task
  • Task success
  • Confusion moment
  • Perceived success
  • Cue recognition
  • Menu/navigation use
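
A minimal sketch of how two of the metrics above, task success rate and time on task, might be computed from per-participant task records; the data structure and numbers are assumptions for illustration.

```python
from statistics import mean, median

# (participant, task, seconds on task, completed successfully) - invented data
records = [
    ("p1", "find_pricing", 48, True),
    ("p2", "find_pricing", 95, True),
    ("p3", "find_pricing", 210, False),
    ("p4", "find_pricing", 63, True),
    ("p5", "find_pricing", 131, False),
]

times = [seconds for _, _, seconds, _ in records]
successes = [ok for _, _, _, ok in records]

print(f"task success rate: {sum(successes) / len(successes):.0%}")
print(f"time on task: mean {mean(times):.0f}s, median {median(times):.0f}s")
```
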
    Engagement
    Engagement is the holy grail for many sites and is a notoriously ambiguous category of metrics. But UX teams could make a real contribution to understanding how much people interact with a site or application, how much attention they give to it, how much time they spend in a flow state, and how good they feel about it. Time might still be a factor in engagement metrics, but in combination with other metrics like pageviews, scrolling at certain intervals, or an event stream. Because this metric is tricky to read, it yields better results in combination with qualitative insights.

  • Attention minutes
  • Happiness rating
  • Flow state
  • Total time reading
  • First impression
  • Categories explored
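
“Attention minutes” can be operationalized in more than one way; a rough sketch of one common approach is to credit the time between interaction events (scrolls, clicks, keypresses) only while the gaps stay short. The idle cutoff and the event log below are assumptions for illustration.

```python
IDLE_CUTOFF = 10  # seconds of inactivity after which time is no longer credited

# (seconds since page load, event type) - an invented single-session log
events = [(0, "load"), (2, "scroll"), (5, "scroll"), (9, "click"),
          (40, "scroll"), (43, "scroll"), (80, "keypress")]

attentive_seconds = 0
previous = None
for t, _ in events:
    if previous is not None:
        # Credit the gap since the last event, but only up to the idle cutoff.
        attentive_seconds += min(t - previous, IDLE_CUTOFF)
    previous = t

print(f"attention minutes: {attentive_seconds / 60:.2f}")
```
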
    Conversion
    Conversion is the metric that everyone cares about most, but its use can mean focusing on a small percentage of users who are ready to commit at the expense of other people who are just becoming aware of your site or thinking about increasing their engagement with it. You can use UX metrics to design solutions for these secondary scenarios, too—for example, by looking at users’ likelihood of taking action on micro-conversions, in addition to considering conversion rate and Net Promoter Score (NPS).

    The metrics in this category can help us to spot trends and get past the So what? question that applies to all data. The big metrics give us the big picture, showing how a site or application changes over time and how it lives in the world or the broader context of other experiences.

  • Micro-conversion count
  • Brand attribute
  • Conversion rate
  • Likelihood to recommend, or NPS
  • Trust rating
  • Likelihood to take action
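
A minimal sketch of two metrics from this category: conversion rate from visitor and conversion counts, and NPS from answers to a 0-10 likelihood-to-recommend question (percentage of promoters, 9-10, minus percentage of detractors, 0-6). All numbers are invented.

```python
# Conversion rate (invented counts)
visitors, conversions = 5200, 182
print(f"conversion rate: {conversions / visitors:.1%}")

# Net Promoter Score (invented 0-10 likelihood-to-recommend answers)
scores = [10, 9, 9, 8, 7, 7, 6, 10, 5, 9, 3, 8, 9, 10, 6]
promoters = sum(s >= 9 for s in scores)
detractors = sum(s <= 6 for s in scores)
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:+.0f}")
```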

Pamela Pavliscak: Choosing the Right Metrics for User Experience

Usability metrics

April 6, 2014

A website’s usability is determined by the following factors:

  • Efficiency: the speed with which users can complete their tasks
  • Effectiveness: the completeness and accuracy with which users achieve their goals
  • Satisfaction: how satisfied users are with their experience

ISO 9241
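
A minimal sketch of one common way to operationalize the first two factors: effectiveness as the completion rate, and efficiency as “time-based efficiency” (success divided by time on task, averaged over all attempts). This particular formula is a common convention rather than something prescribed by ISO 9241, and the task data below is invented.

```python
# (success as 1 or 0, seconds spent on the task) - invented attempts
attempts = [(1, 50), (1, 72), (0, 180), (1, 64), (0, 140), (1, 95)]

effectiveness = sum(s for s, _ in attempts) / len(attempts)
time_based_efficiency = sum(s / t for s, t in attempts) / len(attempts)

print(f"effectiveness (completion rate): {effectiveness:.0%}")
print(f"time-based efficiency: {time_based_efficiency:.4f} goals per second")
```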

Talking with Participants During a Usability Test

January 27, 2014

3 safe and productive approaches for interrupting or answering users during usability tests and other research studies:

  • Echo – With the echo technique, the facilitator repeats the last phrase or word the user said, while using a slight interrogatory tone.
  • Boomerang – With the boomerang technique, the facilitator formulates a generic, nonthreatening question that she can use to push the user’s question or comment back to him.
  • Columbo – With the Columbo technique, be smart but don’t act that way. (…) One way to do this is to ask just part of a question, and trail off, rather than asking a thorough question.

Kara Pernice: Talking with Participants During a Usability Test

Huge breakthroughs or huge failures

August 6, 2013

Radically new concepts can lead to huge breakthroughs or huge failures, so the courage to pursue a bold concept must be tempered by an appropriate risk management plan, particularly a lot of product testing of the basic design concept.

Victor Lombardi: Why We Fail

Usability metric

October 17, 2012

Usability metrics reveal [measure] something about […] some aspect of

  • effectiveness (being able to complete a task)
  • efficiency (the amount of effort required to complete a task)
  • or satisfaction (the degree to which the user was happy with his or her experience while performing the task)

Some examples: task success, user satisfaction, errors

Tullis and Albert (2008), pp. 7-8
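
For a binary metric like task success measured on the small samples typical of usability tests, it helps to report a confidence interval alongside the observed rate. A minimal sketch using the adjusted-Wald (Agresti-Coull) interval, which is often recommended for this situation; the counts are invented.

```python
from math import sqrt

def adjusted_wald(successes, n, z=1.96):
    """Approximate 95% adjusted-Wald interval for a task success rate."""
    p_adj = (successes + z * z / 2) / (n + z * z)
    half_width = z * sqrt(p_adj * (1 - p_adj) / (n + z * z))
    return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

low, high = adjusted_wald(successes=7, n=10)
print(f"observed success: 70%, 95% CI roughly {low:.0%} to {high:.0%}")
```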

Comparative Usability Evaluation

September 23, 2012

CUE stands for Comparative Usability Evaluation. In each CUE study, a considerable number of professional usability teams independently and simultaneously evaluate the same website, web application, or Windows program.

The Four Most Important CUE Findings:

  • The number of usability problems in a typical website is often so large that you can’t hope to find more than a fraction of the problems in an ordinary usability test.
  • There’s no measurable difference in the quality of the results produced by usability tests and expert reviews.
  • Six – or even 15 – test participants are nowhere near enough to find 80% of the usability problems. Six test participants will, however, provide sufficient information to drive a useful iterative development process.
  • Even professional usability evaluators make many mistakes in usability test task construction, problem reporting, and recommendations.
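
The third finding above can be put in perspective with the classic problem-discovery model, found = 1 - (1 - p)^n, where p is the probability that a single participant exposes a given problem. The p values below are illustrative; the CUE results suggest that for large sites the effective p is much lower than the often-cited 0.31, which is why even 15 participants fall well short of 80%.

```python
# Problem-discovery model: proportion of problems found after n participants,
# assuming each participant independently exposes a given problem with
# probability p. The p values are illustrative, not measured.
def proportion_found(p, n):
    return 1 - (1 - p) ** n

for p in (0.31, 0.10, 0.05):
    row = ", ".join(f"n={n}: {proportion_found(p, n):.0%}" for n in (6, 15, 30))
    print(f"p = {p:.2f} -> {row}")
```

With p around 0.10 or lower, neither 6 nor 15 participants comes close to 80% of the problems, which is consistent with the CUE observation while still leaving small tests useful for driving iteration.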

In more detail:

CUE-1 to CUE-6 focused mainly on qualitative usability evaluation methods, such as think-aloud testing, expert reviews, and heuristic inspections. CUE-7 focused on usability recommendations. CUE-8 focused on usability measurement.

  • CUE-1 – Four teams usability tested the same Windows program, Task Timer for Windows
  • CUE-2 – Nine teams usability tested http://www.hotmail.com
  • CUE-3 – Twelve Danish teams evaluated http://www.avis.com using expert reviews
  • CUE-4 – Seventeen professional teams evaluated http://www.hotelpenn.com (nine teams with usability testing and eight teams with expert reviews)
  • CUE-5 – Thirteen professional teams evaluated the IKEA PAX Wardrobe planning tool on http://www.ikea-usa.com (six teams with usability testing and seven teams with expert reviews)
  • CUE-6 – Thirteen professional teams evaluated the Enterprise car rental website, http://www.Enterprise.com (ten teams with usability testing, six teams with expert reviews, and three teams with both methods)
  • CUE-7 – Nine professional teams provided recommendations for six nontrivial usability problems from previous CUE-studies
  • CUE-8 – Seventeen professional teams measured key usability parameters for the Budget car rental website, http://www.Budget.com
  • CUE-9 – A number of experienced usability professionals independently observed five usability test videos, reported their observations and then discussed similarities and differences in their observations (the “Evaluator Effect”)

Most important findings from individual CUEs:

  • Realize that there is no foolproof way to identify usability flaws. Usability testing by itself can’t develop a comprehensive list of defects. Use an appropriate mix of methods.
  • Place less focus on finding “all” problems. Realize that the number of usability problems is much larger than you can hope to find in one or even a few tests. Choose smaller sets of features to test iteratively and concentrate on the most important ones.
  • Realize that single tests aren’t comprehensive. They’re still useful, however, and any problems detected in a single professionally conducted test should be corrected.
  • Increase focus on quality and quality assurance. Prevent methodological mistakes in usability testing such as skipping high-priority features, giving hidden clues, or writing usability test reports that aren’t fully usable.
  • Usability testing isn’t the “high-quality gold standard” against which all other methods should be measured. CUE-4 shows that usability testing – just like any other method – overlooks some problems, even critical ones.
  • Expert reviews with highly experienced practitioners can be quite valuable – and, according to this study, comparable to usability tests in the pattern of problems identified – despite their negative reputation.
  • Focus on productivity instead of quantity. In other words, spend your limited evaluation resources wisely. Many of the teams obtained results that could effectively drive an iterative process in less than 25 person-hours. Teams A and L used 18 and 21 hours, respectively, to find more than half of the key problem issues, but with limited reporting requirements. Teams that used five to ten times as many resources did better, but the additional results in no way justified the considerable extra resources. This, of course, depends on the type of product investigated. For a medical device, for example, the additional resources might be justified.
  • The number of hours used for the evaluations seems to correlate weakly with the number of key issues reported, but there are remarkable exceptions.
  • Expert review teams use fewer resources on the evaluation and in general report fewer key issues, but in general their results are fully acceptable.
  • The teams reported surprisingly few positive issues, and there was no general agreement on them. Many positive issues were reported by single teams only. You might ask whether the PAX Planner is really that bad, or if usability professionals are reluctant to report positive findings.
  • Spell out your recommendation in detail to avoid misunderstanding and ‘creative misinterpretation.’
  • Recommend the least possible change. Tweaking the existing thing is always preferable to starting over. Major changes require major effort, including retesting a lot of ‘stuff.’
  • Be careful when you report minor problems from a usability test. No one else may agree with you that the problem is worth reporting.

DialogDesign

How many participants?

May 9, 2011

‘Over the years, there has been plenty of debate over how many participants are enough for a study. It turns out we were looking in the wrong direction. When you focus on the hours of exposure, the number of participants disappears as an important discussion. We found 2 hours of direct exposure with one participant could be as valuable (if not more valuable) than eight participants at 15 minutes each. The two hours with that one participant, seeing the detailed subtleties and nuances of their interactions with the design, can drive a tremendous amount of actionable value to the team, when done well.’

JARED M. SPOOL: Fast Path to a Great UX – Increased Exposure Hours

Biased answers in usability testing

August 1, 2010

Self-reported data is typically three steps removed from the truth:

  • In answering questions (particularly in a focus group), people bend the truth to be closer to what they think you want to hear or what’s socially acceptable.
  • In telling you what they do, people are really telling you what they remember doing. Human memory is very fallible, especially regarding the small details that are crucial for interface design. Users cannot remember some details at all, such as interface elements that they didn’t see.
  • In reporting what they do remember, people rationalize their behavior. Countless times I have heard statements like “I would have seen the button if it had been bigger.” Maybe. All we know is that the user didn’t see the button.

Jakob Nielsen (2001): First Rule of Usability? Don’t Listen to Users