Usability testing
Introduction
Usability tests - also called user tests or diagnostic evaluations - measure whether users can get a task done, e.g. finding information, signing up, or buying something. Such tests are conducted with a small set of representative users, e.g. for testing an e-learning platform one would select students and teachers.
Typically, participants have to solve a few tasks in a session, e.g. six tasks in an hour.
According to usabilityfirst.com, “tasks should represent the most common user goals (e.g. recovering a lost password) and/or the most important conversion goals from the website or application owner’s perspective. [..] On a website or web application, a conversion is any action taken by a user that satisfies the website owner’s business goals. Common examples include signing up for an email newsletter, making a purchase, or viewing an important web page.” (retrieved March 13, 2011).
User actions are recorded in various ways, e.g. an expert may observe user actions and enter summary data (such as "missed") into an application. In more sophisticated setups, users are videotaped from two angles, screen activity is recorded, and so on.
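As an illustration of what such summary data might look like once collected, here is a minimal sketch (not from the article; the record fields, task names, and outcome labels are hypothetical):

```python
# Minimal sketch: summarising hypothetical observer records from a test.
# Each record is (task id, participant, outcome, time in seconds).
from collections import defaultdict

records = [
    ("find_info", "P1", "success", 74),
    ("find_info", "P2", "missed", 180),
    ("sign_up",   "P1", "success", 95),
    ("sign_up",   "P2", "success", 120),
]

by_task = defaultdict(list)
for task, participant, outcome, seconds in records:
    by_task[task].append((outcome, seconds))

for task, results in sorted(by_task.items()):
    times = [s for outcome, s in results if outcome == "success"]
    rate = len(times) / len(results)
    mean = f"{sum(times) / len(times):.0f}s" if times else "n/a"
    print(f"{task}: {rate:.0%} completed, mean success time {mean}")
```

Even this simple tabulation yields the two measures most usability reports lead with: task completion rate and time on task.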
Method
Synopsis of a low-cost test
- Sit next to the participant and read out a task.
- Do not help the participant; just observe and give non-committal feedback such as "go on" or "thank you".
- If you do not work with a real usability lab setup (video-taping, observers, etc.), write down important events, i.e. critical incidents and successes (a minimal note-taking sketch follows this list).
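Timestamped notes are enough to reconstruct a session afterwards. Below is a minimal sketch of such a note-taker; the file name and column layout are illustrative choices, not part of the original method:

```python
# Minimal sketch: timestamped note-taking during a low-cost usability test.
# Type a note and press Enter; an empty line ends the session.
import csv
import datetime

with open("session_notes.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        note = input("note> ")
        if not note:
            break
        # One row per event: ISO timestamp plus the observer's free-text note.
        writer.writerow([datetime.datetime.now().isoformat(timespec="seconds"), note])
```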
Thinking aloud
- To learn something about the user's mental model and decision-making processes, ask the participant to "think aloud", i.e. to verbalize what they are thinking and doing.
- This requires recording screen activity together with audio.
Other methods
For more methods, see: Usability Evaluation Methods - Testing at usabilityhome.com
Observer effects
According to A little known factor that could have a big effect on your next usability test (retrieved March 2014), the main observer effect in usability testing is that experts only seem to spot about 50% of usability problems when analysing usability test videos!
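To see why this matters, consider a back-of-the-envelope calculation: if a single evaluator spots about 50% of the problems and, as a simplifying assumption, evaluators missed problems independently of one another, then k evaluators together would spot about 1 - 0.5^k of them:

```python
# Back-of-the-envelope: combined detection rate of k evaluators,
# assuming each spots 50% of problems independently. This independence
# is a simplification; the evaluator-effect studies cited below show
# that evaluators' findings overlap in more complicated ways.
p_single = 0.5
for k in range(1, 6):
    combined = 1 - (1 - p_single) ** k
    print(f"{k} evaluator(s): ~{combined:.0%} of problems found")
```

Under this (optimistic) assumption, three evaluators would find roughly 88% of the problems, which is one reason to pool observations from several evaluators rather than relying on a single analyst.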
Links
- Introductions and tutorials
- Usability Testing. A short introduction at usabilityfirst.com
- Diagnostic evaluation at usabilitynet.net.
- My place or yours? How to decide where to run your next usability test by David Travis, May 6, 2013. Quote: “The most common types of usability test are remote usability tests, corporate lab-based tests, contextual usability tests and rented facility tests. What are the relative strengths and weaknesses of these different approaches to usability testing and how should you choose between them?”
- Standards
- ANSI/INCITS-354 Common Industry Format (CIF) for Usability Test Reports
- Common Industry Format - Usability Reporting Elements. This is a sort of lookup / example of all elements that would enter a full usability report in CIF format.
- Bibliographies
For popular standard works, see the essential Interaction design, user experience and usability bibliography.
- Jarrett, Caroline (2004). Better Reports: How to Communicate the Results of Usability Testing. Proceedings of STC 51st Annual Conference, Society for Technical Communication, Baltimore, MD, May 9-12, 2004.
- Butler, K., Wichansky, A., Laskowski, S. J., Morse, E. L., & Scholtz, J. C. (2003). The Common Industry Format: A Way for Vendors and Customers to Talk About Software Usability. Computer-Human Interaction Conference, September 8-12, 2003, Bath, England.
- Jacobsen, N. E., Hertzum, M., & John, B. E. (1998). The evaluator effect in usability studies: Problem detection and severity judgments. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting (pp. 1336-1340). Santa Monica, CA: HFES.
- Hertzum, M., Jacobsen, N. E., & Molich, R. (2014). What You Get Is What You See: Revisiting the Evaluator Effect in Usability Tests. Behaviour & Information Technology, 33:2, pp. 143-161.