My last post about usability testing covered setup and guidelines for running tests. Following up, here are some outcomes and lessons from a couple of testing sessions so far:
  • You can get great feedback. It would be ideal to build a product purely from the founders’ experience and have customers use it seamlessly. However, founders’ view of the world can be skewed. Usability testing solicits feedback directly from the target audience.
  • Find users who will provide candid feedback. The operative words are “provide” and “candid”. Usability testing can be awkward for test-takers who aren’t used to giving honest feedback. It’s important for both the company and its customers that users provide critical feedback knowing no answer is wrong – the benefits go around for everyone.
  • Bugs can screw it all up. The goal of usability testing is to assess how users interact with the product. Sometimes you’ll want users to perform specific tasks, but product bugs can quickly halt testing and prevent constructive feedback. Feedback then focuses on bugs rather than usability.
  • Early testing can tell you everything. Stop. Iterate. Continue later. That is, early users may surface glaring issues during tests – you’ll know it when it happens. In that case, pause testing until those headaches are resolved; otherwise, you’ll hear the same feedback from every user. Then go again.
  • Watch for unspoken signs. In presentations and demos, I scan faces in the room for emotions and reactions. In usability testing, I do the same. Do they seem delighted or uninterested? Is the user focused? Confused? What’s left unsaid can be most telling.

Usability testing has been a great process for us so far. We identified key gaps and have since focused our efforts on select features while reducing clutter. Before, we only hypothesized what users wanted; testing has provided real data and insight.

What outcomes or lessons have you learned from usability testing as a test-giver? Any input as a test-taker? How has usability testing helped your company’s product development and roadmap?
I’m helping run usability tests with a startup, though I have not formally “run” one before. It’s been a great learning experience.
The company is launching a new education site, and with the product having been in development for a few months, it’s ready for users to test. We’re soliciting feedback on the user experience, shaping our product roadmap, and testing marketing messaging.
I’ve uploaded a Usability Testing Guidelines template here: Usability Testing Guidelines. I created it by adapting others I’ve found. I’ve removed some specific portions but left others in for illustrative purposes; adjust as you see fit.
(Note: I use “site” here, but substitute whatever product or service you’re testing.)
Here are the key components of the Guidelines:
  • Testing – Observer Guidelines. General instructions for the observer when moderating tests.
  • Agenda and Set-Up. Establish the logistics for testing. Ensure the location is ready and devices are set up, and have back-up plans in case things don’t go as planned.
  • Testing Script. Set the guidelines between tester and test-taker, including language that encourages honest, spontaneous feedback.
  • Usability Tasks. Ensure specific elements of the site are tested. Tasks help keep the test moving along the timeline.
  • Closing Questions. When in front of target users, it’s advantageous to ask specific and open-ended questions as time allows. These questions help shape the product roadmap, hone marketing messaging, surface trouble areas not identified before, etc.

Additionally, I like a hybrid approach to testing – unstructured at the beginning, then structured with tasks. In the unstructured format, users explore the site on their own, narrating their thoughts and driving the interaction as they wish. In the structured format, the tester guides the user through tasks and questions.

What lessons do you have from usability tests as a tester or test-taker? How have usability tests been poorly run? Effectively run?