Desktop Usability Testing

As a mobile provider, Sprint wants to ensure that the desktop view of Sprint.com enables customers to quickly compare mobile phones, compare plans, view coverage maps, and purchase a phone.

Problem Definition

To remain competitive, Sprint.com must continually evaluate whether customers can complete tasks, how quickly they can complete them, and how satisfied they are while doing so. The objectives of this usability test were to measure and compare the effectiveness, efficiency, and satisfaction of the Sprint.com and ATT.com websites.
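As a rough illustration of how those three measures can be summarized (the field names, scales, and numbers below are hypothetical, not taken from the study materials):

```python
from statistics import mean

# Hypothetical per-task results for one participant on one site.
# "completed" is a binary success flag, "seconds" is time on task,
# and "satisfaction" is a post-task rating on a 1-5 scale.
tasks = [
    {"completed": True,  "seconds": 74.0,  "satisfaction": 4},
    {"completed": True,  "seconds": 112.0, "satisfaction": 3},
    {"completed": False, "seconds": 180.0, "satisfaction": 2},
]

# Effectiveness: proportion of tasks completed successfully.
effectiveness = mean(1 if t["completed"] else 0 for t in tasks)

# Efficiency: mean time on task, here limited to successful tasks.
efficiency = mean(t["seconds"] for t in tasks if t["completed"])

# Satisfaction: mean post-task rating.
satisfaction = mean(t["satisfaction"] for t in tasks)

print(effectiveness, efficiency, satisfaction)
```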

Audience

Thirty-two volunteers (15 female and 17 male) participated in the study. Their average age was 32.2 years (SD = 9.1). All participants reported using the Internet more than once per day. None of the participants had prior experience using the Sprint or AT&T websites to purchase telephone hardware.

Team / Role

I contributed to all aspects of testing, including counterbalancing of task sequences, setting quantitative and qualitative evaluation measures, test scripting and moderation, participant screening and recruitment, findings analysis, and writing recommendations.

Constraints

Our testing team was 100% remote, which made excellent asynchronous communication important. We met synchronously through Google Hangouts and used Google Docs for project planning, data collection, and writing up the analysis and recommendations.

Design Process

Download the full research paper for detailed research methodology and findings.


Once our team had been formed, we agreed on a day of the week that worked for all of us to meet synchronously, discuss any issues, and agree on statements of work. During the first meeting our team agreed on the testing stimuli and approach, giving us a solid foundation for the Google Doc that became the backbone of the project.

Our project was divided into ten one-week sprints during which we either contributed together to a single portion of the project or broke off to tackle smaller portions individually.


Structuring the Research

Our team assigned each of us small chunks of the test design so we could work in tandem. One teammate would contribute to an area of the document and then ask the rest of the team to contribute feedback by an agreed-upon review date.

One of the memorable parts I created was the task-sequence pattern used to counterbalance participants and avoid learning effects. It was satisfying to consider the problem of learning during the test and account for it with task sequencing so it would not bias the analysis phase.
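As a sketch of the idea (the actual task list and sequences from the study are not reproduced here), a balanced Latin square is a common way to generate counterbalanced task orders:

```python
def balanced_latin_square(tasks, participant_id):
    """Return a counterbalanced task order for one participant.

    With an even number of tasks, each task appears in each serial
    position equally often across participants and each task follows
    every other task exactly once, spreading learning (practice)
    effects evenly instead of letting them bias particular tasks.
    """
    n = len(tasks)
    order = []
    j, h = 0, 0
    for i in range(n):
        if i < 2 or i % 2 != 0:
            val = j
            j += 1
        else:
            val = n - h - 1
            h += 1
        order.append(tasks[(val + participant_id) % n])
    # With an odd number of tasks, reverse every other participant's
    # order to keep the design balanced.
    if n % 2 != 0 and participant_id % 2 != 0:
        order.reverse()
    return order


# Hypothetical task names, for illustration only.
TASKS = ["compare phones", "compare plans", "check coverage", "purchase phone"]
for p in range(4):
    print(p, balanced_latin_square(TASKS, p))
```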

While the research design was being created, each team member actively recruited test participants. We selected coworkers and friends who were willing to participate in a 30-minute test, were familiar with computers and the Internet, and satisfied a number of other screening criteria so they would not bias our research findings.


The Testing Phase

Because our research was highly structured, the testing phase was very straightforward.

Before the participant arrived I set up the testing hardware with Silverback and prepared the web browser. After some welcoming small talk and a few easy but research-salient questions to ease the participant into the testing mindset, I handed them a booklet describing the scenarios they would complete.

During the testing I took detailed notes of anything I noticed and timed each task.

After the testing I asked the participant a few questions about their experience to understand their mindset, then thanked them for their time.


The Analysis Phase

Once a participant completed testing we assigned them a number and entered their anonymized details and results into a spreadsheet in Google Docs. From there we analyzed the quantitative results for patterns and wrote the statistical analysis portion of the paper. Both sites permitted high rates of task completion, but results for both efficiency and satisfaction favored Sprint.com.

The statistical analysis was straightforward because it involved calculations that Google Docs handled for us, but the usability findings portion of the paper was much more interesting to write. While the statistics were cut and dried, the usability findings provided a lens through which readers could better understand both the mindset of the test participants and the reasons behind the statistical results.
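As an illustration of the kind of calculation involved (the numbers below are invented, not the study's data, and a within-subjects design is assumed because each participant used both sites), a paired comparison of task times might look like this:

```python
from statistics import mean
from scipy.stats import ttest_rel

# Hypothetical per-participant mean task times in seconds; each
# participant completed equivalent tasks on both sites.
sprint_times = [68.2, 74.5, 81.0, 59.9, 77.3, 70.1]
att_times = [75.4, 88.1, 86.7, 72.0, 90.5, 79.8]

print("Sprint.com mean time:", round(mean(sprint_times), 1))
print("ATT.com mean time:   ", round(mean(att_times), 1))

# Paired t-test, since the same participants used both sites.
t_stat, p_value = ttest_rel(sprint_times, att_times)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```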

Findings and recommendations took this a step further and allowed our team to explain how the already-successful Sprint.com site could be made even more efficient.


Retrospective

Our testing was very successful, but our timeline did not leave much room for analysis and writing up our findings. If I were to perform testing again I would be just as thorough with the testing itself, but allow more time for the statistical analysis and for the context needed to frame the test results.
