Usability Testing on Agile Projects (part 2)


January 17th, 2012


In part 1 of this series, I discussed the benefits of usability testing on enterprise software projects and outlined a general approach for integrating the practice of usability testing into a typical Scrum project (see diagram below). In this post, I’ll lay out a 3-step process for performing a usability test on an enterprise project, describe some recognized best practices, and highlight how the process can hook into the standard elements of Scrum (e.g. burndown, backlog, etc.). Here we go…


[Diagram: usability testing integrated into the Scrum process]

Step 1: Plan for the Test

Preparation is the trickiest part of usability testing. If good users are not found, relevant tasks are not defined, or environments are not set up, the results of the test will be specious at best and non-existent at worst! It is crucial, therefore, to develop the right foundation for the test:

Find the Users: For the typical B2C web site, representative users can be found on any street corner or in any coffee shop. For many enterprise projects, however, users are not so easily located or accessed. Because users of enterprise software may have specific domain knowledge, work in different departments, or reside in different geographic offices, procuring access to them or aligning schedules can be a significant challenge. If possible, it is important to surmount these obstacles – real users will give more accurate feedback, and will generally be more receptive to the system when it is eventually rolled out, becoming advocates of it rather than opponents.

When it is not possible to find real users, representative users (e.g. mapping to defined personas), or even non-representative users, can be substituted with some success; many usability issues can be uncovered by users with no domain experience – a fresh set of eyes is the most salient requirement.

It is well documented that usability testing requires no more than 4-6 users per round. Usability issues are typically universal and so are found early – there are diminishing returns to testing with more than 6 users (i.e. they all tell you about the same problems!).
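Jakob Nielsen’s well-known model makes the diminishing returns concrete: if each user independently uncovers, on average, about 31% of a system’s usability problems, then n users will surface roughly 1 − (1 − 0.31)^n of them. A minimal sketch of the arithmetic (the 31% figure is Nielsen’s empirical average across projects, not a constant of nature):

```python
# Nielsen's model: proportion of usability problems found by n users,
# assuming each user independently uncovers about L = 31% of them.
L = 0.31

for n in range(1, 9):
    found = 1 - (1 - L) ** n
    print(f"{n} user(s): ~{found:.0%} of problems found")
```

The curve climbs steeply at first and then flattens: five users surface roughly 85% of the problems, and each additional user mostly re-reports what you have already seen.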



Plan the Tasks: Not every feature of an application can or should be usability tested. It is important, therefore, to identify for usability testing only those features which are prominent, complex, or have a high cost of user error. Once these features are identified, a script of user tasks can be created (see the sketch below). Note that tasks should be clear, concise, and worded in a way that is independent of the design of the application (e.g. not “click ‘add item to cart’ and click the ‘purchase’ button” but rather “buy a widget”). Lastly, it is important to run through the script yourself, to ask another team member to do the same, and to ensure that the script takes no more than 15-30 minutes to execute (with the assumption that a user unfamiliar with the application could take 3x this long).
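To make that concrete, here is a minimal sketch of a task script. The application and the tasks themselves are hypothetical; the point to notice is that each task names a user goal, never a specific button or screen:

```python
# A hypothetical task script for a fictional invoicing application.
# Each task is worded in terms of a user goal, not a UI element.
tasks = [
    "Find the customer with an outstanding balance.",
    "Send that customer a reminder about their overdue invoice.",
    "Record a payment of $250 against the oldest open invoice.",
    "Print a summary of this month's unpaid invoices.",
]

# Print the script as it would be handed to the test participant.
for i, task in enumerate(tasks, start=1):
    print(f"Task {i}: {task}")
```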

Write the Post-Test Questionnaire: Depending on the application, it may be useful to learn not only about the user’s performance (i.e. can they use the system) but also their preference (i.e. do they like it). Surprisingly, it has been found that the two do not always correlate – users may have problems using the application but say they like it, or use it without trouble but say they hate it. For this reason, it is important to supplement the observational data (i.e. notes from watching the user) with data from a post-test questionnaire. Note that demographic, qualitative, and quantitative data are all important – so ask both open-ended questions like “how did you feel about XYZ?” and scaled questions like “on a scale of 1-5, was the system usable?”. Lastly, the late usability expert Randy Pausch advocated always asking at least one question after a usability test: “what were the two worst things about the system?”. Phrased this way, the user is forced to shed light on at least two tangible usability flaws, rather than trying to avoid hurting your feelings.
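A minimal sketch of such a questionnaire, mixing the two question types (the specific wording is hypothetical):

```python
# A hypothetical post-test questionnaire mixing open-ended (qualitative)
# and scaled (quantitative) questions, per the guidance above.
questionnaire = [
    {"type": "open",  "text": "How did you feel about the invoicing workflow?"},
    {"type": "scale", "text": "On a scale of 1-5, how usable was the system?"},
    {"type": "open",  "text": "What were the two worst things about the system?"},
]

# Collect answers interactively and keep them alongside the questions.
responses = []
for q in questionnaire:
    suffix = " [1-5]: " if q["type"] == "scale" else ": "
    responses.append({"question": q["text"], "answer": input(q["text"] + suffix)})

print(responses)
```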

Set up the Environment, Data, and Access: Finally, a stable and perhaps isolated version of the system must be available for the test participants to use. In many cases this will be the normal QA environment, but depending on the frequency of builds or the transiency of the data, this may or may not be adequate. If it is not, alternatives are to deploy the system on your local machine or to create a special, temporary environment specifically for usability testing. The feasibility of these options obviously depends on the complexity of the application and its dependent sub-systems, services, or data stores – a simple web app backed by a single database may be easy to configure and deploy, whereas a complex system that leverages LDAP, multiple databases, an ERP, and a handful of external web services would not be.

Additionally, for some usability test scripts, it will be necessary to have each user test with a fresh, pre-configured, non-confidential set of data (for example, if a user is asked to find the customer with an outstanding balance, then such a customer must exist). If this is the case, a set of SQL set-up and tear-down scripts may be necessary, or perhaps just a script for manually re-creating the data via the user interface. In any case, it is imperative to understand the data dependencies for the test up-front, to ensure that tests go smoothly and that users find issues related to usability, not bad data conditions.
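As a sketch of what such set-up and tear-down scripts might look like, here is a small example using Python’s built-in sqlite3 module. The schema, table names, and data are hypothetical, chosen to match the “customer with an outstanding balance” task above:

```python
import sqlite3

# Hypothetical schema and seed data: the test script asks the participant
# to "find the customer with an outstanding balance", so one must exist.
SETUP = """
CREATE TABLE IF NOT EXISTS customers (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    balance REAL NOT NULL
);
INSERT INTO customers (name, balance) VALUES
    ('Acme Corp', 0.00),
    ('Widget World', 125.50);  -- the outstanding balance the task depends on
"""

TEARDOWN = "DROP TABLE IF EXISTS customers;"

def reset_test_data(db_path="usability_test.db"):
    """Tear down and re-seed the data so each participant starts fresh."""
    with sqlite3.connect(db_path) as conn:
        conn.executescript(TEARDOWN)
        conn.executescript(SETUP)

if __name__ == "__main__":
    reset_test_data()
```

Running the reset between participants guarantees every user starts from the same known data conditions.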

Lastly, in many cases the test subject will log in as a pre-configured sample user; in some cases, however, it will be helpful for users to log in as themselves and see “their” data during the test. If so, it is important to verify up-front that each user has the appropriate access rights in the QA environment – you don’t want to discover that a user lacks authorization only when they try to log in.

Step 2: Perform the Test

If planning is done correctly, actually performing the usability test should be a snap – your job is simply to make the subject feel comfortable, stand back, and watch what they do! Here are a few things to keep in mind.

Introduce Yourself and the Test: When meeting the user, it’s important to set a very informal, relaxed, and open atmosphere early, so that the user feels comfortable sharing how they feel about the system. To that end, be sure to mention the following simple points: that you are testing the system, not them, so any problems found are the system’s fault; that there are no wrong answers and they can’t break anything; that they should think out loud as they work; and that honest, even critical, feedback is the most helpful kind.

After introductions, give them the test script and have them briefly skim the tasks. Convey to them that although the tasks should take only 30-45 minutes, they should feel comfortable taking their time.

Observe, Ask Questions, but Don’t Help: Once the user has begun, it’s important to let them use the system as they wish, even if that means letting them navigate down false paths (the whole point is to see how they would use it if you weren’t there!). As they complete the tasks, encourage them to talk, and be sure to ask probing, but non-revealing, questions, like “I saw that you clicked the ‘filter’ button but were surprised by the result – what did you expect would happen there?”. If the user is hopelessly stuck, it may be necessary to give them a hint, but otherwise remain an observer.

Take Notes: As the user completes the tasks, take copious notes on their actions, comments, and frustrations – these notes will trigger your memory later and fuel the Usability Observations list that you assemble. Many would argue that it is also necessary to video record usability test sessions; in my opinion, however, the benefit is rarely worth the cost (even though recording is relatively cheap): project stakeholders seldom take the time to actually watch a usability test video, users may be apprehensive and less honest when being recorded, and the set-up and storage of the recordings can be frustrating and time-consuming.

Step 3: Analyze the Results

Once the tests have been completed, the data must be analyzed and then synthesized into a set of actionable recommendations. To do so, two artifacts can be created: an objective Usability Observations List and a subjective Usability Analysis & Recommendation document.

Create a Usability Observations list: Go back through your notes and record, in a simple spreadsheet, the discrete usability issues that users encountered (see the sketch below). The list should be objectively and tactfully worded, stating, for example, that the user “did not know what the word ‘invoice’ meant” rather than that “‘invoices’ should be called ‘bills’”. Consensus should be noted for each item (e.g. “3 out of 6 users”), and each should be classified either as a “usability bug” (to be entered in an issue tracking system and fixed during the current sprint) or as a “feature request” (to be added to the product backlog and prioritized for a future sprint).
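A sketch of what that spreadsheet might look like, written out as CSV; the observations themselves are hypothetical:

```python
import csv

# Hypothetical observations from one round of testing. Each row records
# the issue (objectively worded), the consensus, and a classification
# that routes it either to the current sprint or to the backlog.
observations = [
    ("Did not know what the word 'invoice' meant",      "3 of 6 users", "usability bug"),
    ("Looked for search in the top navigation bar",     "5 of 6 users", "usability bug"),
    ("Asked whether invoices could be exported to PDF", "2 of 6 users", "feature request"),
]

with open("usability_observations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Observation", "Consensus", "Classification"])
    writer.writerows(observations)
```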

Create a Usability Analysis & Recommendation document: Finally, it will be helpful for project stakeholders to receive a high-level analysis of the usability test: a summary of the overall impression of the site’s usability (using both quantitative and qualitative data as useful), the areas of the system that were most and least usable, and a set of recommendations for future action (e.g. new features for the backlog, additional rounds of usability testing, etc.). This document may be captured in a wiki, stored in document control (e.g. SharePoint), or simply sent via email.

That’s it! Again, I’d like to hear your thoughts on usability testing on enterprise projects. Please share!

Comments (2)
Radha Tupil
March 16, 2012
Nice post! Our team did exactly the same kind of usability exercise for a mobile product that we developed. Being agile helped, because we ran the tests across two iterations, thereby covering the product’s major business functionality. The best part was that we were able to incorporate fixes from the first study, build the product further, and take it back to the users for round 2. For round 2, we ensured we had a mix of users from round 1 as well as a few first-time users. At the end of it, the usability feedback really took our product one notch higher.
Ben
March 30, 2012
Thanks Radha! We’ve also found that usability testing works well within Agile projects… because, as you said, the iterative nature makes it easy to quickly address feedback from the usability tests.