
Data Flow Engine

Nature of product/system: Data preparation tool

Persona: Expert IT programmer

Stage at which I join the project: After global interaction design was established

My role: (Primary) UX researcher, (Secondary) Interaction design mentor

Type of research: Formative, Qualitative, UX and capability definition

Research Technique(s): Remote interactive talk-aloud while using partially completed application 


What was the problem?

The tool Pitney Bowes provided for creating batch data flows and services (Enterprise Designer®) was a Windows application that was difficult to use and based on soon-to-be obsolete technology (Silverlight). A multi-year project was in place to recreate this functionality for the web. As new sets of functionality were developed, the team wanted user feedback and suggestions for functional and UX improvements.

What were the stakeholder assumptions?

In the first release, data flows and services created or edited with the new product were required to be backward-compatible with the existing tool. This put constraints on improvements that could be made to the UX.

The high-level interaction design was established before my joining the team, but the project was open to changes in the layout, icons, and operation of individual UI components.

How did you determine the forms of research to conduct?

We were in the definition stage of the project, so research focused on:

  • Vetting UX options to present existing functionality and UI elements, while improving user interaction.

  • Collecting ideas for new functionality that didn’t affect backward compatibility.

  • Conducting usability evaluations with internal and external expert users.

Which research method did you choose and why?

Remote, moderated interviews were conducted with expert users. Participants used portions of the actual UX code and were asked for their reactions, suggestions for updates, and requests for additions.

Sessions were conducted in small groups by company, so the participants knew each other. After each session, a list of feedback was produced, categorized by UI screen and system function. My observations of UX issues, with recommendations to address them, were included and reviewed.

Did it work? If not, what would you do differently next time?

The sessions were very successful at identifying areas of the UX needing improvement and business capabilities that should be added. Users were forthcoming with useful, well-thought-out requests and suggestions.

How were your recommendations tied to both user and business value?

The participants were frequent, highly knowledgeable users of the existing application. In many cases, their job effectiveness depended on its capabilities and usability. All of the feedback was directly relevant to the business value of the tool.

Did you iterate and improve each cycle? 

Yes. A new round of research was scheduled soon after each new set of UX functionality became operational. We had enough external experts available to recruit new users for each session.

How was your research received?

The development team was enthusiastic about the research and looked forward to the input. Members of the development team observed each session.

How did your results bring impact?

Every set of findings was reviewed with the team and vetted to make certain it was understood accurately. Feedback was prioritized by its impact on usability and the number of participants who mentioned it.

Items that were either urgent or easy to implement were added to the upcoming sprint immediately. Those that didn’t seem urgent, or for which we received conflicting user feedback, were set aside until further input could be collected.