[Image: BANK pic.JPG]

Bank Fraud Detection - System Troubleshooting

Nature of product/system: Data scrubbing and pattern matching tool

Persona: Dedicated fraud analysts and IT programmers

Stage at which I join the project: After global interaction design was established

My role: Part-time Interaction Designer (Primary), UX Researcher (Secondary)

Type of research: Formative, qualitative, business process and capability definition

Research Technique(s): Remote collaborative session; problem/solution enumeration

[Image: Screener_Sharp.jpg]

What was the problem?

A bank fraud detection system is only as good as its matching rules and lists of bad actors. Lists are updated at least once per day from many sources, including the bank’s staff. Robust troubleshooting is required to minimize false positives and omissions.

This research identified the types of errors that can occur in the system, their causes, and how to address them. The system was originally developed for MUFG Bank of Japan. This exercise collected enhancements needed to make it a standard Pitney Bowes offering.

What were the stakeholder assumptions?

  • The participants had enough experience with the system to enumerate the range of errors the system produced and their causes.

  • The database platform on which the system was based was robust enough to provide the needed diagnostics and reports.

  • Our customer's IT users have the knowledge to maintain matching rules and diagnose poor data feeds.

  • If given the right visualizations, customer business analysts would be able to determine the sources of system errors.

Which method(s) did you choose and why?

A method was devised to prompt research participants for ways the system could fail. For each failure type, they listed:

  • The ways to detect that type of failure

  • The possible sources of that type of failure

  • How to fix each cause

  • How to prevent that category of failure going forward
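The four prompts above can be captured as one record per failure type. The sketch below is illustrative only: the field names, the `FailureMode` class, and the sample entry are assumptions for demonstration, not artifacts from the actual study.

```python
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    """One failure type captured during the enumeration exercise."""
    name: str
    detection: list[str] = field(default_factory=list)   # ways to detect it
    causes: list[str] = field(default_factory=list)      # possible sources
    fixes: dict[str, str] = field(default_factory=dict)  # cause -> remedy
    prevention: list[str] = field(default_factory=list)  # guards going forward

# Hypothetical example entry, not taken from the study's findings
stale_list = FailureMode(
    name="Stale watch list",
    detection=["Match counts drop sharply after a feed window"],
    causes=["Upstream feed missed its daily update"],
    fixes={"Upstream feed missed its daily update":
           "Re-run the ingest and re-screen the day's transactions"},
    prevention=["Alert when a list's last-updated timestamp exceeds 24 hours"],
)
```

Structuring each failure type this way keeps the detect/cause/fix/prevent prompts parallel across participants, which is what allows the results to be rolled up into categories later.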

[Image: ScreenerClose-up.jpg]

The exercise was conducted online as a group using MURAL. Conventions were created to ensure the method didn’t get in the way of the participants recording their thoughts. Time was reserved for each person to present and clarify their ideas with the group.

Did it work? If not, what would you do differently next time?

Though the method did not result in an exhaustive list of failure types, it led to the identification of all categories of failure and a troubleshooting decision tree for the system. This enabled the project to create troubleshooting features for each system module. It also provided enough understanding to guide the design of visualizations that help business analysts diagnose data configuration errors.
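A troubleshooting decision tree of this kind can be modeled as a small branching structure. The sketch below is a hypothetical illustration: the questions, outcomes, and the `diagnose` helper are invented for demonstration and do not reproduce the actual tree produced by the study.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A yes/no question; each branch is another Node or a final outcome string."""
    question: str
    yes: "Node | str"
    no: "Node | str"

def diagnose(node, answers):
    """Walk the tree using a dict of question -> bool answers; return the outcome."""
    while isinstance(node, Node):
        node = node.yes if answers[node.question] else node.no
    return node

# Hypothetical fragment of a troubleshooting tree
tree = Node(
    "Did the daily list feed load successfully?",
    yes=Node("Are match rates within the expected range?",
             yes="No fault detected",
             no="Review matching-rule configuration"),
    no="Escalate to IT: repair the data feed",
)
```

For example, answering "no" to the feed question routes straight to the IT escalation branch, mirroring how the tree let analysts and IT split diagnostic responsibility by failure category.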

How were your recommendations tied to both user and business value?

The exercise produced a troubleshooting decision tree that made the categories of failure visible and let the team determine how the system could address each.

Did you iterate and improve each cycle?

Not in this exercise. However, each troubleshooting category spawned iterative design and test cycles.

How was your research received?

It was used as the basis for product planning and backlog development going forward.

How did your results bring impact?

Without this exercise, numerous troubleshooting capabilities would not have been identified and added to the system.