When we’re working in sprints, we need results quickly
In an agile sprint where time is short, we need to get to the analysis quickly. Rapidly and economically launching a user experience test is only half of the equation; reviewing the results, analyzing them, and making decisions in a timely manner is the other half. If we launch a test with 30-minute sessions for 10 participants, we have 5 hours of video to watch, annotate, and analyze in order to pull the insights that drive product and UI optimization; 100 participants would mean 50 hours. This time adds up quickly. We use a testing platform that enables us to review participant sessions quickly, use searchable, time-stamped, hyperlinked audio transcriptions to locate the most interesting actions and comments, add and share hyperlinked annotations, and create and share highlight reels.
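To make the arithmetic concrete, here is a minimal sketch (in Python, purely illustrative; the function name and the optional overhead factor are our own, not part of any testing platform) of how raw review time scales with participant count and session length.

```python
# Minimal sketch: estimating raw review time for a usability study.
# The numbers mirror the example above; overhead_factor is a hypothetical
# multiplier for annotation and note-taking time, not a measured value.

def review_hours(participants: int, session_minutes: int = 30,
                 overhead_factor: float = 1.0) -> float:
    """Return the total hours needed to review all session recordings."""
    return participants * session_minutes * overhead_factor / 60

print(review_hours(10))    # 5.0 hours of raw video
print(review_hours(100))   # 50.0 hours of raw video
```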
Resist the urge to help quickly
Product developers are often eager to help users use and like their products, but doing so during a session would defeat the purpose of usability testing. Our usability test moderators follow the best practice below.
We remain neutral – we are there to listen and watch. If the participant asks a question, we reply with “What do you think?” or “I am interested in what you would do.” We do not jump in to help participants immediately, nor do we lead them. If the participant gives up and asks for help, we must decide whether to end the scenario, give a hint, or give more substantial help.
If you are not measuring, you are not managing
Learning from how our users engage with our product is one way to ensure a positive customer experience. It is also important, however, to see how our product compares to competitors through benchmarking. Benchmarking our prototype designs against each other, against existing production assets, and against our competition lets us identify additional opportunities to improve usability and create a superior customer experience. The best way to do this is to use pre-formatted System Usability Scale (SUS) questions with automatic calculation of the resulting score. We also use comparison metrics such as Net Promoter Score (NPS), time on task, and task success/failure rate, which let us quantitatively measure our usability and user experience across design iterations, against the competition, and against best-practice websites and apps.
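As an illustration, here is a minimal sketch of how SUS and NPS scores are conventionally computed (Python; the function names are ours, and a testing platform's built-in calculations may differ in rounding or in how they handle incomplete responses).

```python
# Minimal sketch of the standard SUS and NPS scoring formulas.
# Function names are illustrative, not tied to any particular platform.

def sus_score(responses: list[int]) -> float:
    """System Usability Scale score (0-100) from ten 1-5 Likert answers.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5 to reach a 0-100 scale.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # 0-based index: even i = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

def net_promoter_score(ratings: list[int]) -> float:
    """NPS (-100 to 100) from 0-10 'likelihood to recommend' ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: one participant's SUS answers and a small batch of NPS ratings.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))        # 85.0
print(net_promoter_score([10, 9, 8, 6, 10, 7, 9, 3]))   # 25.0
```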