Hollie

Week 9: Prototyping and Usability Testing (Part 2)

Updated: Apr 23, 2023

1. Setting the scene


I wanted to add a quick second blog entry this week (written a couple of weeks after my 'Prototyping and Usability Testing (Part 1)' article) to share my reflections on a second and third usability test I carried out with my high-fidelity prototype. I was quite apprehensive about sharing my full prototype with users (and my tutors!), but I'm really pleased with the insights I've been able to gain – both in terms of the improvements I've been able to make to my prototype and potential areas for future iteration and experimentation.


2. Breadth of analysis: exploring key skills and domains – usability testing part 2


2.1 Research aims


Following my language-focused usability test, my tutors recommended that I carry out usability testing on the functionality of my app too, to get a feel for its overall usability. I was keen to explore a different UX research tool this time, so I decided to try out Userbrain to conduct a remote usability test with 3 users.


I wanted to find out how easily participants were able to apply for a specific volunteering role, how they felt about the application process itself, whether they felt the information shown throughout the app was clear, and how they would rate their experience of using the app overall. One of my tutors also fed back on my first prototype that the initial onboarding flow could be seen as a little long, so I specifically wanted to see whether this view was shared by my participants while they completed specific tasks.


2.2 Areas for improvement


Watching the videos of users completing the test was a fascinating experience. Overall, participants' responses were very positive, particularly in relation to ease of use, visual appearance, and the amount of information given, and no users commented that the onboarding experience was too long (I did, however, gain some interesting insights on the complexity of the causes and skills screens – see 3. Depth of insight: enhancing my practice – validating tutor feedback below).


Interestingly, one participant commented that they would use the app in particular when in a new area or city they're unfamiliar with, which gave me food for thought in terms of a potential future iteration designed specifically with travellers in mind.


As a visual design newbie, I was especially pleased with participants' feedback on the visual design, which was described as "very good" and "beautiful" – something I've taken particular care to refine and polish over the last couple of weeks (see my blog entry for Week 8: Visual Design).


The most interesting part of the user testing, however, was seeing the areas where users struggled – which were often areas I simply hadn't been expecting to cause any problems.


For example, two out of the three participants attempted to interact with the locked badges on the 'Progress' screen and commented that they expected to be able to click on them to see what they would need to do to unlock them:


"It would be helpful [...] to [...] click on a locked one [...] to see what's required to unlock the other badges mentioned here"


"It would be lovely to also have the possibility to click on each badge so I can know what I have to do to earn certain badges"


I therefore removed the orange colour of the locked badges and replaced this with a grey overlay to make it clearer that they hadn't yet been unlocked. So that users wouldn't face a dead-end if they tried to interact with them, I also added overlay modals to each badge, explaining what users would need to do to unlock them (see Fig. 1).

Fig. 1: Improved 'Progress' badges


The user testing, then, was invaluable for gaining insights into improvement areas I hadn't already considered myself. I also discussed this with a colleague at work, and we concluded that no user testing is wasted, even if it doesn't yield exactly the insights you were looking for.


3. Depth of insight: enhancing my practice – validating tutor feedback


I also had an extremely productive feedback session with one of my tutors this week on my initial draft PDF case study, as well as my high-fidelity prototype. One area my tutor suggested could be improved further was the layout of the 'causes to help', 'skills to gain', and 'skills to give' pages.


My second round of usability testing had already revealed some problems with these screens. Users were taking quite a long time to find specific skills and causes from the long, uncategorised list presented to them. One user in particular stated that they would feel "anxiety" when interacting with these screens, and would worry that they had missed something important.


As a potential solution, I rearranged both the causes and skills into overarching categories, which I then displayed alphabetically, using the Gestalt principles of similarity and proximity combined with heading hierarchy and white space to create a cleaner, more scannable list for each screen (see Fig. 2):


Fig. 2: Unordered list (left) to list with clearer categories, heading hierarchy, and white space (right)


Despite these changes, my tutor suggested that the screens could still be a little overwhelming, and recommended trying a version that utilised progressive disclosure – for example, a list of drop-down accordions. On this advice (and bearing Hick's Law (Yablonski 2023) in mind), I created a new version where the overarching cause or skill categories are shown as a short list of options. When clicked, each category brings up a bottom sheet overlay with the relevant causes or skills for that category, which can then be individually selected (see Fig. 3):

Fig. 3: Progressive disclosure variant
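
As a quick aside for anyone unfamiliar with it, Hick's Law (as summarised by Yablonski 2023) is commonly formulated as:

T = a + b · log₂(n + 1)

where T is the time taken to make a decision, n is the number of equally likely choices, and a and b are empirically determined constants. In theory, collapsing a long list into a handful of categories reduces n, and with it the decision time – which is why the progressive disclosure variant seemed so promising on paper.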


I then carried out a third usability test (this time taking an A/B format) with 3 more participants (also using Userbrain), comparing the expanded list and progressive disclosure variants.


All 3 participants overwhelmingly preferred the expanded list variant, taking on average just 4 seconds to find the options in the expanded list versus an average of 9 seconds in the progressive disclosure version:


"[The expanded list variant] was much easier to use due to not having to click in and out of the menu categories. It felt more intuitive and easier to navigate."


"Every option was right away visible and it was easy to find the options which were searched [for]."


I've therefore decided to keep the expanded list variant in my prototype until I've tested the progressive disclosure approach with more participants.


4. Conclusions


Conducting my usability tests this week has been so much fun – and provided me with a wealth of insights into potential improvement areas for my app. The testing I carried out following my tutor's feedback was an especially helpful reminder of the importance of checking our assumptions. Even though logically a progressive disclosure variant could have been an effective way to avoid overwhelming users, in reality, my participants far preferred being able to see their selections and the options available to them all in one go.


4.1 SMART goal: get more involved with usability testing at work


In a 'real-life' project, I would aim to validate these findings with a greater number of users to feel even more confident that I'd taken the right approach. To be able to do this, I want to get more comfortable with carrying out usability testing as part of professional UX projects. I plan on speaking to my UX designer colleagues to arrange an opportunity to partner with them over the next 3 months on an end-to-end usability test, from script creation, through to analysis and presentation of the results to stakeholders.

5. References


YABLONSKI, Jon. 2023. 'Hick's Law'. Laws of UX. Available at: https://lawsofux.com/hicks-law/ [accessed 1 April 2023].
