
Usability Test

A usability test can be quantitative, based on users’ performance on a given task (e.g., task-completion times, success rates, number of errors), or it can reflect participants’ perception of usability (e.g., satisfaction ratings). A typical quantitative metric is the percentage of the participants in a study who were able to complete a task.

It can also be qualitative, offering a more direct assessment of the usability of a system: researchers observe participants struggling with specific UI elements and infer which aspects of the design are problematic and which work well.
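
As an illustration of the quantitative side, here is a minimal sketch of how the metrics mentioned above (success rate, completion time, error count) could be summarised from raw session data. The field names and sample records are hypothetical, not an actual Studyportals format.

```python
from statistics import mean

# One record per participant per task: whether they succeeded, how long
# it took (seconds), and how many errors they made along the way.
sessions = [
    {"participant": "P1", "completed": True,  "time_s": 74,  "errors": 1},
    {"participant": "P2", "completed": False, "time_s": 180, "errors": 4},
    {"participant": "P3", "completed": True,  "time_s": 92,  "errors": 0},
]

success_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = mean(s["time_s"] for s in sessions if s["completed"])
avg_errors = mean(s["errors"] for s in sessions)

print(f"Success rate: {success_rate:.0%}")             # e.g. 67%
print(f"Avg time (successful runs): {avg_time:.0f}s")  # failures excluded
print(f"Avg errors per session: {avg_errors:.1f}")
```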


Moderated usability test

Users are brought to the office and met by (at least) two researchers: one leads the participant(s) through the task(s), while the other observes and takes notes, to make sure the most important insights are well recorded.
At Studyportals these tests are most often done remotely, with the researchers following along from a distance.

When to use:

When designs need to be tested or verified; having a moderator makes it possible to dive into issues through additional questions.

How to use:

1. Prepare the test: write the scenarios to give to the users and prepare the task(s) and questions, including possible follow-up questions;

Studyportals Good-practices list:

Prepare the Excel notetaking file (a template sketch follows these steps);
Check vouchers;
Send consent forms and make sure to get them signed before the test takes place;
Send reminders via email to users;
If the test is remote: add users to Skype;
When allowing others in the company to observe: send a Skype for Business link;
Align with the product researcher on the script;
Print the script;
Do a pilot test (normally with someone new in the company or who doesn’t know much about our website) to find errors, “bugs” or missing parts;
Do the setup for Flashback (our recording system).

2. During the test, keep the participants focused on the task(s) and thinking aloud.
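
The good-practices list above mentions preparing the Excel notetaking file. As an illustration only, a blank template could be generated as a CSV (openable in Excel) with a few lines of Python; the column names here are hypothetical, not the actual Studyportals template.

```python
import csv

# Hypothetical columns for structured notetaking during a session.
columns = ["Task", "Participant", "Observation", "Quote", "Severity", "Follow-up asked"]

with open("notetaking_template.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    # One empty row per planned task so the notetaker can start quickly.
    for task in ["Task 1", "Task 2", "Task 3"]:
        writer.writerow([task, "", "", "", "", ""])
```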

Advantages:

- More control from the researchers;
- More specific follow-up questions to each user;
- Gives a more complete overall picture of the user experience.

Disadvantages:

- Harder to schedule appointments with the users;
- More time and money are needed (arranging transport, food…).

Tool(s) used at Studyportals:

Skype, Flashback and Appear.in.

Unmoderated usability test

An automated method that uses a specialized research tool to capture participant behaviours (through software installed on participant computers/browsers) and attitudes (through embedded survey questions), usually by giving participants goals or scenarios to accomplish with a site or prototype.

When to use:

When the main focus of the study is on a few specific elements and the timeframe is tight.

How to use:

1. Prepare the test and the scenarios (the questions should be straight to the point and the instructions clear, since the researchers cannot interrupt or guide the users during the test);
2. Install a tool for remote testing, for example “TryMyUI”;
3. Add the test and its tasks to the tool. If necessary, predefine follow-up questions to appear after each task or at the end of the session (see the sketch after these steps);
4. Do a pilot test (with somebody from the office) to make sure the script works and to minimize errors.
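
As an illustration only, a test plan for step 3 could be drafted as structured data before entering it into the tool. The keys, wording, and tasks below are hypothetical; the actual structure depends on the tool (e.g., TryMyUI) being used.

```python
# Hypothetical draft of an unmoderated test plan.
test_plan = {
    "scenario": "You want to study abroad and are comparing master's "
                "programmes in the Netherlands.",
    "tasks": [
        {
            "instruction": "Find an English-taught master's in Data Science "
                           "and open its programme page.",
            "follow_up": "How easy or difficult was it to find this programme?",
        },
        {
            "instruction": "Add the programme to your wishlist.",
            "follow_up": None,  # no per-task question here
        },
    ],
    "final_questions": [
        "What, if anything, confused you during these tasks?",
    ],
}
```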

Advantages:

Compared to the moderated usability test setup:
- No/less effort in recruiting users;
- No time spent emailing users, sending instructions, and setting up Skype;
- No/fewer technical issues and no devices/tools to set up;
- No meeting room needed;
- No money needs to be arranged for the participants’ transport, snacks, or drinks, although more money is usually needed to compensate the participants;
- No researcher(s) needed during the test.

Disadvantages:

- It isn’t possible to ask detailed questions about specific user actions, and participants have no real-time support (questions, clarification, technical issues);
- If they come from a panel, participants might already know how the UI works and complete the tests faster, so they may no longer represent average users (check whether they remember to think aloud);
- If a participant forgets to think aloud, no one is there to remind him/her;
- The researcher(s) can’t observe the session live and don’t know how the session went until it’s finished;
- It isn’t possible to distinguish whether the user really understood the task(s): he/she might keep trying and eventually complete the task(s) by chance and/or persistence, but the system registers both equally as “task completed” (one way to flag such sessions is sketched after this list);
- After the test, participants tend to give an inflated ease-of-use score for the task(s).
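
A minimal sketch of the flagging mentioned above, assuming the testing tool exports per-session data with completion status, duration, and error counts (the field names and thresholds are hypothetical). Sessions marked “completed” that took unusually long or involved many errors can be set aside for manual review of the recording, since the tool alone cannot tell understanding apart from persistence or luck.

```python
MAX_TIME_S = 150   # assumed threshold: review completions slower than this
MAX_ERRORS = 3     # assumed threshold: review completions with more errors

results = [
    {"participant": "P1", "completed": True, "time_s": 70,  "errors": 0},
    {"participant": "P2", "completed": True, "time_s": 340, "errors": 6},
]

for r in results:
    if r["completed"] and (r["time_s"] > MAX_TIME_S or r["errors"] > MAX_ERRORS):
        print(f"{r['participant']}: completed, but review the recording "
              f"({r['time_s']}s, {r['errors']} errors)")
```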