Structured user testing identifies how ordinary people actually use a product. The goal is to uncover usability issues and improve the user experience. The moderator assigns pre-prepared tasks to respondents, simulating common situations that users face. During these tests, data on user reactions and errors are collected, helping to objectively evaluate the intuitiveness and effectiveness of the product. To obtain relevant results, 3-5 respondents are sufficient.
The output of structured testing includes quantifiable results, which form the basis for design improvement recommendations, increase user satisfaction, and support product success in the market. Structured testing can be conducted in a laboratory (a quiet room without distractions, with good lighting and appropriate recording equipment) or in the field – on the street, at the respondent's location, or in another suitable public space. If testing software, it can also be done remotely via platforms like MS Teams, Zoom, or similar. It is recommended to record each test, whether using a camera, mobile phone, or – in the case of remote testing – by enabling the Record function. Before recording, remember to inform respondents of this fact and obtain their consent.
Implementation Steps of the Method:
- Define Goals and Prepare the Scenario: Begin by clearly defining what you want to achieve through testing. Based on these goals, prepare a scenario that includes a series of tasks set in a real context. This will help participants easily relate to the situations in which they would use your product or service.
- Select a Moderator and Observer: Choose a person to lead the testing and ask questions, and an observer who will watch the sessions and take notes.
- Recruit Participants: Gather a group of participants who represent your target audience. Be sure to include both inexperienced and experienced users.
- Conduct Testing with Each Participant Individually: Read the scenario tasks to them one by one and let them perform each task. Encourage them to verbalize what they are doing and thinking – you need not only to see how they complete the tasks but also to understand their thought processes. If necessary, ask questions to clarify why they made certain decisions.
- Debrief and Analyze: After completing the testing, conduct a debriefing with all involved – participants and observers. Identify which tasks were problematic and why. Based on the gathered information, determine key areas for improvement and plan necessary adjustments to enhance the product.
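The analysis step above often comes down to tallying which tasks tripped up how many participants. A minimal sketch of that tally, using an invented observation log (participant codes, task names, and issues below are hypothetical examples, not data from a real study):

```python
# Hypothetical observation log as an observer might record it during
# sessions: (participant, task, issue) entries. All values are invented.
observations = [
    ("P1", "turn_on_tv", "did not know the wake word"),
    ("P1", "change_channel", "command misheard"),
    ("P2", "turn_on_tv", "did not know the wake word"),
    ("P3", "find_next_program", "gave up, reached for the remote"),
    ("P3", "turn_on_tv", "did not know the wake word"),
]

# For each task, collect the distinct participants who had trouble with it.
affected = {}
for participant, task, issue in observations:
    affected.setdefault(task, set()).add(participant)

# Rank tasks by how many participants struggled -- a quick severity
# ordering to structure the debrief discussion.
for task, people in sorted(affected.items(), key=lambda kv: -len(kv[1])):
    print(f"{task}: {len(people)} of 3 participants had trouble")
```

A task that blocked all or most participants (here, turning on the TV) is the obvious first candidate for redesign; issues hit by a single participant go on the watch list for the next round.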
Sample Test Scenario:
Imagine you are sitting comfortably on the couch in your living room and you want to change the channel, adjust the volume, or search for content without having to manually navigate the menu.
Tasks:
- Turn on the TV using a voice command.
- Change to a different channel using a voice command.
- Increase the volume in the same manner.
- Use your voice to find out what is coming up next after the program you are currently watching (on the same channel).
Tips:
- If you want to find as many errors as possible and you only have 6 respondents, it is more advantageous to conduct 2 rounds with 3 respondents each rather than 1 round with 6 respondents. Six people will essentially find the same issues as three. By fixing the errors after the first round of testing and conducting a second round, you can find smaller errors that would have been overlooked in a single large test due to more significant problems.
- When formulating tasks, try to avoid direct leading questions or formulations that might guide participants toward a solution. Additionally, resist the temptation to help participants when they encounter difficulties, even if they directly ask you what to do next.
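The small-sample guidance above (3–5 respondents, two rounds of three rather than one round of six) follows from the classical problem-discovery model: if each participant independently uncovers a given issue with probability p, then n participants find a share of 1 - (1 - p)^n of the issues. The value p = 0.31 below is the widely cited average from Nielsen and Landauer's studies; treat it as an illustrative assumption, since the real rate varies by product and task.

```python
# Expected share of usability problems found by n participants, using the
# discovery model found(n) = 1 - (1 - p)^n.
# p = 0.31 is Nielsen and Landauer's average per-participant detection
# rate, used here purely for illustration.
def share_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 6):
    print(f"{n} participants: ~{share_found(n):.0%} of problems")
```

Under this model, three participants already surface roughly two thirds of the problems and five around 85%, while the sixth adds little. That is why fixing after a first round of three and re-testing with three fresh participants beats one round of six: the second round probes the repaired design, not the same already-known defects.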
Possible Uses:
- Understanding how people actually use the product
- Confirming or disproving a hypothesis about how people use the product in a specific situation – for example: "People will be able to use the new feature on the TV remote without training, making their lives easier."
- Verifying the usability of a wireframe, model, or prototype (early development phase)
- Verifying the usability of the product before market launch (final development phase)