Posts Tagged ‘usability’

Parallax scrolling

May 9, 2016

“In the age-old debate between beauty and function, parallax scrolling wins the beauty pageant, but fails miserably in terms of function.”

Parallax Scrolling: Attention Getter or Headache? by Jacqueline Kyo Thomas

Lightbox overlay

February 4, 2016

Don’t use an overlay unless you have a clear, compelling case for why this content should not be presented within a regular page. Good reasons for using an overlay could include:

  • The user is about to take an action that has serious consequences and is difficult to reverse.
  • It’s essential to collect a small amount of information before letting users proceed to the next step in a process.
  • The content in the overlay is urgent, and users are more likely to notice it in an overlay.

Kathryn Whitenton in https://www.nngroup.com/articles/overuse-of-overlays/

Cognitive Load

September 10, 2014

The total cognitive load, or amount of mental processing power needed to use your site, affects how easily users find content and complete tasks.

Kathryn Whitenton: http://www.nngroup.com/articles/minimize-cognitive-load/

Usability metrics

April 6, 2014

A website’s usability is determined by the following factors:

  • Efficiency: the speed with which users can complete their tasks
  • Effectiveness: the completeness and accuracy with which users achieve their goals
  • Engagement: how satisfied the user is with their experience

ISO 9241
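
The three ISO 9241 factors map directly onto quantities you can compute from test-session data. A minimal sketch, assuming hypothetical field names, sample data, and a 1–5 post-task rating scale:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One participant's attempt at a task (hypothetical test data)."""
    completed: bool     # did the user reach the goal?
    seconds: float      # time on task
    satisfaction: int   # post-task rating, 1 (worst) to 5 (best)

def summarize(sessions):
    """Aggregate the three ISO 9241 usability factors across sessions."""
    done = [s for s in sessions if s.completed]
    effectiveness = len(done) / len(sessions)               # task completion rate
    efficiency = sum(s.seconds for s in done) / len(done)   # mean time, successes only
    engagement = sum(s.satisfaction for s in sessions) / len(sessions)
    return effectiveness, efficiency, engagement

sessions = [Session(True, 42.0, 4), Session(True, 58.0, 5),
            Session(False, 120.0, 2), Session(True, 50.0, 4)]
effectiveness, efficiency, engagement = summarize(sessions)
print(f"effectiveness={effectiveness:.2f}, "
      f"efficiency={efficiency:.1f}s, engagement={engagement:.2f}")
```

Note that efficiency is averaged over successful attempts only; including failed attempts would conflate it with effectiveness.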

Nielsen: 10 general principles for interaction design

November 24, 2013

  1. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  2. Match between system and the real world: The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  3. User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  4. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  5. Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  6. Recognition rather than recall: Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  7. Flexibility and efficiency of use: Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
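
Heuristics 3 and 5 in particular translate directly into code. A minimal sketch, with entirely hypothetical names, of a destructive action guarded by explicit confirmation (error prevention) and backed by undo (user control and freedom):

```python
class MailBox:
    """Toy mailbox illustrating confirm-before-commit and undo."""

    def __init__(self, messages):
        self.messages = list(messages)
        self._trash = []  # kept around so deletion stays reversible

    def delete(self, index, confirm):
        """Delete a message only after an explicit confirmation callback."""
        subject = self.messages[index]
        if not confirm(f"Delete '{subject}'? This moves it to trash."):
            return False  # user backed out: no silent side effects
        self._trash.append((index, self.messages.pop(index)))
        return True

    def undo(self):
        """The 'emergency exit': restore the most recently deleted message."""
        if self._trash:
            index, subject = self._trash.pop()
            self.messages.insert(index, subject)

box = MailBox(["Invoice", "Newsletter"])
box.delete(0, confirm=lambda prompt: True)  # user confirmed the deletion
box.undo()                                  # and then changed their mind
print(box.messages)                         # both messages are back
```

The confirmation is injected as a callback so the same logic works behind any UI, and keeping a trash list (rather than deleting outright) is what makes the undo cheap.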

Number of clicks irrelevant

August 6, 2013

It doesn’t matter how many times I have to click, as long as each click is a mindless, unambiguous choice.

Steve Krug: Don’t make me think

Comparative Usability Evaluation

September 23, 2012

CUE stands for Comparative Usability Evaluation. In each CUE study, a considerable number of professional usability teams independently and simultaneously evaluate the same website, web application, or Windows program.

The Four Most Important CUE Findings:

  • The number of usability problems in a typical website is often so large that you can’t hope to find more than a fraction of the problems in an ordinary usability test.
  • There’s no measurable difference in the quality of the results produced by usability tests and expert reviews.
  • Six – or even 15 – test participants are nowhere near enough to find 80% of the usability problems. Six test participants will, however, provide sufficient information to drive a useful iterative development process.
  • Even professional usability evaluators make many mistakes in usability test task construction, problem reporting, and recommendations.
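
The participant-count finding can be put in context with the classic problem-discovery model of Nielsen and Landauer, 1 - (1 - p)^n, where p is the probability that a single participant encounters a given problem. The model is not part of the CUE studies themselves, and the p values below are purely illustrative:

```python
def discovered(n_participants, p=0.31):
    """Expected share of usability problems found by n participants,
    under the Nielsen-Landauer model: 1 - (1 - p)^n, where p is the
    probability that one participant encounters a given problem."""
    return 1 - (1 - p) ** n_participants

# With Nielsen's often-quoted p = 0.31, five users look sufficient:
print(f"{discovered(5):.0%}")           # about 84%
# But with a lower per-user detection rate, even fifteen users
# leave nearly half the problems unfound:
print(f"{discovered(15, p=0.05):.0%}")  # about 54%
```

The CUE results suggest that for real sites the effective p is far below 0.31, which is exactly why "80% of the problems" is out of reach for small tests even though small tests remain useful for driving iteration.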

In more detail:

CUE-1 to CUE-6 focused mainly on qualitative usability evaluation methods, such as think-aloud testing, expert reviews, and heuristic inspections. CUE-7 focused on usability recommendations. CUE-8 focused on usability measurement.

  • CUE-1 – Four teams usability tested the same Windows program, Task Timer for Windows
  • CUE-2 – Nine teams usability tested http://www.hotmail.com
  • CUE-3 – Twelve Danish teams evaluated http://www.avis.com using expert reviews
  • CUE-4 – Seventeen professional teams evaluated http://www.hotelpenn.com (nine teams with usability testing and eight teams with expert reviews)
  • CUE-5 – Thirteen professional teams evaluated the IKEA PAX Wardrobe planning tool on http://www.ikea-usa.com (six teams with usability testing and seven teams with expert reviews)
  • CUE-6 – Thirteen professional teams evaluated the Enterprise car rental website, http://www.Enterprise.com (ten teams with usability testing, six teams with expert reviews, and three teams with both methods)
  • CUE-7 – Nine professional teams provided recommendations for six nontrivial usability problems from previous CUE-studies
  • CUE-8 – Seventeen professional teams measured key usability parameters for the Budget car rental website, http://www.Budget.com
  • CUE-9 – A number of experienced usability professionals independently observed five usability test videos, reported their observations and then discussed similarities and differences in their observations (the “Evaluator Effect”)

Most important finding from individual CUEs:

  • Realize that there is no foolproof way to identify usability flaws. Usability testing by itself can’t develop a comprehensive list of defects. Use an appropriate mix of methods.
  • Place less focus on finding “all” problems. Realize that the number of usability problems is much larger than you can hope to find in one or even a few tests. Choose smaller sets of features to test iteratively and concentrate on the most important ones.
  • Realize that single tests aren’t comprehensive. They’re still useful, however, and any problems detected in a single professionally conducted test should be corrected.
  • Increase focus on quality and quality assurance. Prevent methodological mistakes in usability testing such as skipping high-priority features, giving hidden clues, or writing usability test reports that aren’t fully usable.
  • Usability testing isn’t the “high-quality gold standard” against which all other methods should be measured. CUE-4 shows that usability testing – just like any other method – overlooks some problems, even critical ones.
  • Expert reviews with highly experienced practitioners can be quite valuable – and, according to this study, comparable to usability tests in the pattern of problems identified – despite their negative reputation.
  • Focus on productivity instead of quantity. In other words, spend your limited evaluation resources wisely. Many of the teams obtained results that could effectively drive an iterative process in less than 25 person-hours. Teams A and L used 18 and 21 hours, respectively, to find more than half of the key problem issues, but with limited reporting requirements. Teams that used five to ten times as many resources did better, but the additional results in no way justified the considerable extra resources. This, of course, depends on the type of product investigated. For a medical device, for example, the additional resources might be justified.
  • The number of hours used for the evaluations seems to correlate weakly with the number of key issues reported, but there are remarkable exceptions.
  • Expert review teams use fewer resources on the evaluation and in general report fewer key issues, but in general their results are fully acceptable.
  • The teams reported surprisingly few positive issues, and there was no general agreement on them. Many positive issues were reported by single teams only. You might ask whether the PAX Planner is really that bad, or if usability professionals are reluctant to report positive findings.
  • Spell out your recommendation in detail to avoid misunderstanding and ‘creative misinterpretation.’
  • Recommend the least possible change. Tweaking the existing thing is always preferable to starting over. Major changes require major effort, including retesting a lot of ‘stuff.’
  • Be careful when you report minor problems from a usability test. No one else may agree with you that the problem is worth reporting.

DialogDesign

Reduce cognitive strain

August 18, 2012

The general principle is that anything you can do to reduce cognitive strain will help, so you should first maximise legibility … If you care about being thought credible and intelligent, do not use complex language where simpler language will do … In addition to making your message simple, try to make it memorable. Put your ideas in verse if you can; they will be more likely to be taken as truth.

Daniel Kahneman, Thinking, Fast and Slow (2012), pp. 62-63

Mobile usability

February 13, 2012

Mobile sites have higher measured usability than desktop sites when used on a phone, but mobile apps score even higher.

Jakob Nielsen

Interface chrome

January 30, 2012

“Chrome” is the user interface overhead that surrounds user data and web page content. Although chrome obesity can eat half of the available pixels, a reasonable amount enhances usability.

Jakob Nielsen’s Alertbox