Skills used here:
- User research – usability testing and user interviews
- Planning a testing cycle
- Presenting to stakeholders
- Workshops – planning and facilitation
This was my first project as Usability Intern at the National Library of Scotland. I was presented with the ‘Library Search’ service – the catalogue that allows readers to reserve and collect books for reference in the Reading Rooms.
The software for ‘Library Search’ had been live for a year, and some readers had complained that they couldn’t find what they were looking for, or couldn’t work out how to search, in the new system. These complaints came from the optional ‘Leave us Feedback’ forms available in the Reading Rooms and from email responses. This wasn’t seen as enough evidence to make drastic changes to ‘Library Search’, and while there was agreement that readers should be consulted for feedback, there was limited knowledge of the best way to do this or of how to organise the feedback received.
The ‘Sandbox’ is a prototype facility for ‘Library Search’ that allows changes to be made to the existing search database without affecting the live service. The Library Search Coordination Group were in the process of asking staff about a change to the formatting of the search bar that they were trialling on the Sandbox, and I used this as an opportunity to begin usability testing on the service. I sent out a document requesting feedback from staff as planned, but then asked to supplement this by conducting usability testing with readers in the Library’s Reading Rooms, something that had never been done before at the Library.
Integrating user testing in a way that is manageable for a service that is already live.
10 hours every 3 weeks (2 hrs planning, 3 hrs testing, 2 hrs workshop prep, 3 hrs workshops).
1. Assess the usefulness of user testing for a service that is already live.
2. Involve all relevant departments in service design decisions.
3. Gain insightful user feedback about ‘Library Search’.
4. Get feedback about the search box format.
5. Improve ‘Library Search’ in line with user expectations.
- Showing the value of user testing
- Creating a model of testing that could be maintained long-term
- Managing feedback for a service that has never been tested with users
I took the staff feedback that I had received, and I made a list of some of the recurring comments. From this, I created a task script for 1-2-1 usability testing based around 3 key questions that would show how readers found material based on the existing search box model.
I consulted stakeholders about the task questions and explained why I had chosen them. I finalised a plan comprising an introduction, a methodology, and appendices containing the script itself, and asked for feedback on the proposal before going ahead.
I decided to conduct ‘guerrilla testing’ by asking people collecting books in the Reading Rooms to take part in usability testing. I obtained full consent from these readers and they were offered a £20 Amazon voucher in exchange for up to 20 mins of their time.
I booked out the room adjacent to the Reading Rooms to conduct the testing, and recorded the screen and voice of the participants as they took part. I chose 3 people to participate on my first day of testing, and in 3 hours, I had completed all 3 tests.
I planned to do testing in regular phases that didn’t take too much time. Guerrilla testing suited this, and because I planned to test so regularly, selecting a truly representative sample was not necessary. I wanted to gain insights from 3 people, fix the issues that were arising for all 3 of those people, and then carry out a second phase of testing (where new issues would arise since the old ones were fixed!).
Developers were looking to change the search box format, from a selection at the side of the search bar to a drop-down menu. They also wondered if they should change the wording of one of their search scopes, from ‘everything’ to ‘all collections’.
The ‘Library Search’ homepage contained information about what each search scope meant, but feedback from the Reading Rooms suggested that people didn’t know what these were. To me, this meant the page either wasn’t prominent enough or couldn’t be found at all.
I wanted to see how readers interacted with the existing selection option and whether they understood what the search scopes meant to begin with. I created 3 tasks with these overall aims in mind.
The first task question aimed to find out if people could find the ‘Library Search’ homepage and whether they understood and noticed the information on the page:
1. Can you show me how you would request a book from the Reading Rooms if you were at home? You can look for anything you want.
The second task aimed to find out whether people could use the ‘Library Collections’ scope easily, as the book was easy to find in this scope but many pages down in the default ‘Everything’ search:
2. You want to read a copy of Harry Potter and the Chamber of Secrets in the Library’s Reading Rooms. See if they have this, and then follow the steps you would to request it.
The final task aimed to see whether people would recognise the need to switch back to an ‘Everything’ search to locate a journal article available online:
3. You’re researching how recipes and cultural attitudes to Christmas Puddings have changed in the last 100 years. Find an article about ‘Christmas Pudding’ in the British Medical Journal from the 1970s.
Once I had finished the testing, I was certain I wanted a collaborative approach to implementing changes.
I knew I would have to convince some people of the benefits of usability testing, since the concept was so new, so I prepared a video edit of some of the comments from the 3 usability tests I had conducted, and a presentation about the testing. I wanted members to agree to attend workshops where we would watch the videos in full and then discuss the issues that arose, working together to prioritise them.
I did face some resistance from members of the group, who argued that while the insights were great, they were from 3 people only. I acknowledged this, but pointed out that all 3 people had the same issues. In order to exhibit this and reinforce the method that I had chosen, I agreed to do a second set of testing on the ‘Sandbox’ (prototype) model, where I asked another 3 users the same 3 questions as I did on the live model.
I conducted 3 workshops with staff to prioritise the issues. We watched one video in full, and a compilation of comments from other videos.
Staff were asked to make notes during the videos, and then I asked them to note down their top three usability priorities from each phase (live version vs. sandbox), with each issue on a separate post-it note. Everyone had 6 issues by the end of the videos.
“Very useful workshop which provided a very interesting insight into how users access Library Search and how they find the Library’s resources and website in general. Watching the videos was very helpful, and the post-it exercises helped to illustrate the most common problems readers met, and how our colleagues rated these issues in terms of urgency. The presentation was very well delivered.”
From a member of staff who attended the workshops.
I collected and grouped the post-its for similarities, and after a discussion around how the issues fit into the red route usability model, I asked staff to vote on their top 3 overall issues from both phases, by using sticky dots.
I compiled the results of the prioritisation exercise from the workshops and then met with developers and the Library Search Coordination Group to discuss actions. The results were conclusive: 3 separate workshops identified the same issues time and time again, and adding the priority votes together produced a conclusion grounded in both objectivity (from using the red route model) and collaboration.
The results showed developers that small design changes counted for little when so little research had been done into what readers’ goals actually were. After seeing this, the value of usability testing was truly recognised.
“Great to see timing and effort being put into an essential part of library services.”
Feedback from a reader.
Development capacity was balanced against the new insights from user research, and the originally proposed changes were put on hold while larger problems were fixed.
Staff feedback was highly positive about the workshops, and the Coordination Group were keen to continue with testing phases after implementing the new changes and continue gaining insights from readers to better design ‘Library Search’ for their needs.
In future iterations of this testing, I think it would benefit from broader discovery research done before the testing begins. I didn’t realise the importance of discovery research at the time, since I was so early in my UX development, but developing personas and user journeys would provide a convincing base for continuing the testing and give a real focus on the Library’s audience. I also felt I should have communicated with developers more throughout the project. While I tried to involve stakeholders and developers equally, I should have involved the developers of the service much more in the latter stages of the research; this would have prevented hold-ups in implementing some of the changes, as they would have been more aware of what to expect. I can definitely learn from both of these things in future projects!
Refine: Discovery research before testing (persona and journey mapping), communication with developers
Overall, I was really pleased with this project, as I felt it showed the benefits of usability testing without taking up loads of time. I think that by showing how easy usability testing is to do on a short amount of time and budget, I convinced stakeholders that this is something the organisation can and should get on board with. Some members of staff even started using my workshop model and techniques for their own research, which I felt confirmed the usefulness of this project.
Repeat: Testing cycle, the workshop format and the way the testing was done
I thought it was important to see how the service performed currently before testing the proposed model, since no previous user research had been done before.
I chose to conduct 1-2-1 task-based testing in order to gain qualitative and quantitative insights. This was really important to me, as I wanted to gain quantifiable evidence for change that the developers wanted (in terms of task success/failure rates), as well as qualitative insights that could be shown to stakeholders to begin to get them empathising with our readers.
People collecting books in the Reading Rooms must already have used Library Search to place their requests, so I could guarantee relevant participants.
Guerrilla Testing is less prescriptive than traditional usability testing. It suits gaining quick insights when you are less concerned with a representative sample.
There was an assumption that people knew the differences between the search scopes and could navigate them confidently.
My task questions were scenario-based both to relax the reader taking part and to make the task seem like something they would do.
A collaborative approach was important to this project to ensure that everyone felt like they were a part of the developments, and therefore more likely to continue supporting this iterative method of improvement.
The video edit was a great way to show people who thought we designed with ‘users in mind’ that their assumptions were wrong, and that people struggled with things they had never considered (which demonstrated the importance of doing this kind of testing).
I had wanted to show at least 3 videos in full, but due to time constraints, showing one in full alongside a compilation was more realistic. I edited out technical problems and general chit-chat, but showed the readers completing the tasks in full.
The red route model is a way to classify the problems users face objectively, by asking whether they occur at a crucial point, whether they are persistent, and whether they are easy to overcome.