IN PROGRESS – Remote Research: Card Sort Case Study

This is an ongoing project, so this post isn’t complete just yet (but do enjoy what’s here already).

If you’d like to know when the project is complete, drop me a message on the contact page of my site 🙂

Skills used here:

  • New software: Optimal Workshop
  • Remote user research: card sorting, surveys
  • Close collaboration with stakeholders

Background

Whilst at The National Library of Scotland, one of my projects was to help reorganise the Library Collections landing page on the website, in line with what made sense to most people. The website is currently organised into widgets with broad categories that, when clicked, lead you to information about the collections related to that category.

I wanted to find out a number of things about the way people were using the Library collections: whether they knew how to access these resources, whether they found the layout useful, and whether the way the information was organised generally made sense to them.

After speaking with stakeholders, it became clear there wasn’t the capacity to make large changes to the website layout (and therefore the layout of the collections pages), so the research was limited to finding out which categories made sense to people. I was aware that the best approach for users would have been to research how they actually used the collections and design around those needs, but I had to consider what was feasible and realistic. The research therefore concentrated on improving the existing page: not altering the information architecture or layout itself, just the way the collections were categorised.

Screengrab of part of the current collections page

This was a project I had planned to run in-house, with a card sorting exercise followed by a focus-group discussion. However, due to the coronavirus pandemic, the Library was shut and the research had to be adapted to run online.

Challenge

Find out how people would categorise collection items, while working remotely and with limited opportunities to gather verbal feedback.

Time

This project spans four weeks, from planning to the final results.

Goals

  1. Re-organise the landing page on the Library website to create categories that allow people to efficiently find the collections most useful to them
  2. Assess remote user research methods to find one that meets the research requirements
  3. Get feedback from users to inform future redevelopment of the collections page IA

Obstacles

  • Conducting user research entirely remotely
  • Recruiting participants in the middle of a pandemic, without the physical advertising (e.g. leaflets/flyers/volunteers) that has worked well in the past
  • Following up participants’ responses to the activity with questions while conducting the research remotely

Planning

The planning document I had originally written relied on factors that were completely unusable now that the research was being done remotely. I assessed whether the card sorting could be done online, and what this might mean for the insights we were hoping to gather. I discovered Optimal Workshop, whose ‘OptimalSort’ tool would allow the card sorting to be done online.

I met with stakeholders to assess the feasibility of this approach, and together we decided on the cards. Close collaboration with stakeholders at this point was the most important thing I did during this project: the cards were chosen to give the most useful answers to their questions, while my input added flexibility around their ideas and gave some control back to the users. This balance mattered, because it meant the findings would actually be implemented, without the exercise simply affirming stakeholder assumptions or ignoring user input.

We planned to run the card sort remotely, followed by a survey sent to the same users once we had had time to digest the results. This would mimic the focus-group element we had wanted in the first instance, as we would get feedback on our proposed categories and listen to suggestions. Participants would be sent a £10 incentive as a thank-you on completion of the sort and the survey.

Test-runs

Once the cards were agreed upon and the instructions were set, I wanted to learn from my last project (Data Foundry Case Study) and run a test of the card sort before it went live. This was to prevent a low response rate and, especially as this was now a remote exercise, to ensure that people knew exactly what they were supposed to be doing and looking at. The Library collections are so diverse that this was really important: nobody would be there to explain an item to users who didn’t recognise it.

I tested the card sort within my immediate work team (Digital Access); some knew about the card sorting, and some knew little about it at all. This was only five people, but they gave useful feedback on the instructions, cards and descriptions that helped refine them.

After this, I rolled the card sort out to all Library staff as a test run. This gained 63 completions and plenty of useful feedback for refining the instructions further. It was invaluable for ensuring that users received a version of the sort that was as easy as possible to understand and use, and I’m so glad we ran it. As a welcome side-effect, it generated lots of interest: it was featured in a staff update by the CEO of the organisation, and I was asked to write articles about it for both the Digital newsletter and the National Librarian’s report to the board. This helped to elevate the status of UX, and staff felt really involved in a Library project.

Also, since all changes to the cards had been made after the smaller test within my team, the staff test run used exactly the version that would go live, so we would be able to directly compare staff results with those from the user test. This would be really interesting to compare!
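One common way to compare two card sorts like this is to build a co-occurrence count for each pair of cards (how many participants placed them in the same group) and compare the counts between the staff and user runs. As a minimal sketch, assuming we can export each participant’s sort as a mapping from group name to the cards in it (the card names and data layout here are hypothetical, not OptimalSort’s actual export format):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sorts):
    """Count how often each pair of cards lands in the same group.

    `sorts` is a list of card sorts, one per participant; each sort
    maps a participant-chosen group name to a set of card names.
    """
    counts = Counter()
    for sort in sorts:
        for cards in sort.values():
            # Sort each pair so (a, b) and (b, a) count as the same pair.
            for a, b in combinations(sorted(cards), 2):
                counts[(a, b)] += 1
    return counts

# Hypothetical mini-sorts from two staff participants
staff_sorts = [
    {"Maps": {"Bartholomew Archive", "OS maps"}, "Music": {"Glen Collection"}},
    {"Geography": {"Bartholomew Archive", "OS maps", "Glen Collection"}},
]
counts = cooccurrence(staff_sorts)
# Pairs with high counts in both the staff and user runs suggest
# groupings that both audiences agree on.
print(counts.most_common(3))
```

Running the same function over the staff and user data gives two matrices whose biggest differences highlight where staff intuitions diverge from users’.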

Recruitment

We wanted to reach a wide range of people for this exercise. The focus wasn’t on representative users but on varied users, and we wanted around 30 participants for a good amount of manageable results: roughly five people from each age group (18-25, 26-35, 36-45, 46-55, 56-65, 65 and over). We also wanted a variety of sexes, abilities and interests within this, so I created a screening survey that would let us filter people if interest was high.

We created a webpage (nls.uk/usability) with full details of the testing, to be transparent with users about the process and what we were trying to achieve. This page contained the link to the survey, so that participants had the chance to fully understand the research before completing the screening survey to sign up.

We advertised the testing in the Library’s e-newsletter, and when this seemed to be reaching users who were largely 40+, we expanded the adverts to social media (Twitter, LinkedIn, Facebook). This resulted in 93 interested people, from whom we selected six from each age group, with complete variety within that. Selecting six rather than five compensated for any drop-out between the card sort and the survey. We planned to leave two weeks between the closing of the card sort and sending out the follow-up survey.
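The quota step above (six people per age band from 93 respondents) can be sketched as a small selection script. This is purely illustrative: the field names and response data are hypothetical, and the real screening also balanced sex, ability and interests rather than picking at random within each band.

```python
import random

AGE_GROUPS = ["18-25", "26-35", "36-45", "46-55", "56-65", "65+"]

def select_participants(responses, per_group=6, seed=42):
    """Randomly pick `per_group` respondents from each age band.

    `responses` is a list of screening-survey answers, each a dict
    with (at least) an "age_group" key. A fixed seed keeps the
    selection reproducible.
    """
    rng = random.Random(seed)
    chosen = []
    for band in AGE_GROUPS:
        pool = [r for r in responses if r["age_group"] == band]
        # Never ask for more people than a band actually has.
        chosen.extend(rng.sample(pool, min(per_group, len(pool))))
    return chosen

# 93 hypothetical respondents spread across the six bands
responses = [
    {"name": f"p{i}", "age_group": AGE_GROUPS[i % 6]} for i in range(93)
]
picked = select_participants(responses)
print(len(picked))  # 36 when every band has at least 6 respondents
```

Over-recruiting one extra person per band, as the project did, is a cheap hedge against drop-out between the two activities.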

Results

More to come here soon, so keep checking back 🙂
