Measuring Experience Quality

Team Director

Background

Bluecore makes software that allows marketers at large brands to send emails to consumers. The UI is about 4 years old.

The Problem

Measuring a poor-quality experience

Analytics and stakeholder feedback shed light on narrow platform usage and low overall engagement.

Our users required a lot of support from internal teams (success, support, product, solutions) to accomplish their goals. Something was wrong, but there was no clear way to measure the experience or to prioritize solutions.

My team kicked off a project to:

  • Create metrics to define the current state of the platform experience
  • Use data to define the largest gaps and highest-priority fixes to inform roadmap discussions
  • Measure the impact of any new features on the core experience

Leadership Challenge

The organization was tracking metrics for engineering quality and financials, but had no metric to track the quality of the user experience. This made it difficult to track the impact of the Product Design team, but more importantly, it made it difficult to prioritize existing experience enhancements over new feature work.

I took the initiative to create an Experience Benchmark metric that the product development organization aligned on. We defined the current state of the experience for the highest-priority user flows and then set a threshold for what we believed was acceptable; a simplified scoring sketch follows the list below. This gave me:

  1. Empirical and quantifiable evidence of the largest experience issues in the highest priority flows for the platform
  2. The ability to roughly quantify the cost of the issues based on the amount of time internal teams spent completing or supporting the flows instead of users completing them self-sufficiently
  3. Next Steps: I would like to look at the correlation between self-sufficiency in the prioritized user flows and customer attrition.
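
To illustrate how a benchmark like this can work, here is a minimal sketch in Python; the flow names, completion counts, and the 80% threshold are hypothetical placeholders, not Bluecore's actual benchmark definition.

```python
# Illustrative sketch only: flow names, counts, and the threshold are
# hypothetical placeholders, not Bluecore's actual benchmark definition.

ACCEPTABLE_THRESHOLD = 0.8  # assumed target: 80% of participants complete a flow unaided

# Hypothetical results: participants who completed each core flow self-sufficiently
flow_results = {
    "create_campaign": {"completed": 6, "attempted": 10},
    "build_audience": {"completed": 4, "attempted": 10},
    "send_test_email": {"completed": 9, "attempted": 10},
}

def success_rate(result):
    """Share of participants who completed the flow without help."""
    return result["completed"] / result["attempted"]

# Flag the flows that fall below the acceptable threshold
gaps = {}
for flow, result in flow_results.items():
    rate = success_rate(result)
    if rate < ACCEPTABLE_THRESHOLD:
        gaps[flow] = rate

# A simple overall benchmark: the average success rate across core flows
benchmark = sum(success_rate(r) for r in flow_results.values()) / len(flow_results)

print(f"Experience Benchmark: {benchmark:.0%}")
print("Flows below threshold:", gaps)
```

Averaging across flows hides which specific flows are failing, which is why the sketch reports the per-flow gaps alongside the overall number.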

Why this problem matters

Higher user engagement leads to higher revenue based on Bluecore's pricing model.

Lower engagement leads to a loss of potential revenue and a higher cost to support each customer through internal enablement team effort.

The Process We Followed

  1. Align on which user flows/tasks define the ‘core’ experience
  2. Test core flows with representative users
  3. Analyze and report on the current state and metrics
  4. Deliver prioritized list of enhancements to achieve KPIs

Agreeing on ‘Core’ flows was difficult

I define core flows as the critical paths: the highest-priority flows that the majority of users need in order to be successful when using a platform.

We started by looking at product analytics to see the most and least frequently used parts of the platform, including both internal and external user data. We followed that with conversations with customer facing teams and product managers to see if the flows with the most engagement were the highest priority for success.

But…every product manager and engineering leader believed several flows in their own area of ownership were the most important. They focused on their vertical instead of the broader platform experience.

To simplify and reach agreement, I proposed focusing only on brand-new users. More complex flows could come later, but to start, the lead designer proposed a set of 10 core flows we believed every new user should be able to accomplish with 100% success.

Using a traditional usability testing approach

Screenshot of live benchmark test.
Screenshot of a benchmark test in progress.

The lead designer recruited people who had previously used digital platforms to create and send marketing emails, but had never used the Bluecore platform. These users would have the necessary domain knowledge expected of a platform user.

Spreadsheet showing recruiting criteria and responses for each participant.
Participant recruiting details.

Tests were conducted virtually, and each flow was measured for the following (a sample observation record follows the list):

  • task success (binary and relative)
  • task length
  • pathway used
  • errors
  • perceived confidence
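
To make these measures concrete, here is a minimal sketch of how a single task observation could be recorded for later analysis; the field names, scales, and example values are illustrative assumptions, not the team's actual test template.

```python
from dataclasses import dataclass

# Hypothetical record structure; field names and scales are illustrative,
# not the actual template used in the benchmark study.
@dataclass
class TaskObservation:
    participant: str
    flow: str
    success: bool            # binary task success
    relative_success: float  # partial credit (0.0-1.0) when completed with difficulty
    duration_seconds: float  # task length
    pathway: str             # route taken through the UI (e.g., global nav vs. search)
    errors: int              # missteps observed during the task
    confidence: int          # participant's self-reported confidence (e.g., 1-5)

# Example observation from a single moderated session (values are made up)
obs = TaskObservation(
    participant="P03",
    flow="create_campaign",
    success=False,
    relative_success=0.5,
    duration_seconds=412.0,
    pathway="global_nav",
    errors=3,
    confidence=2,
)
```

Capturing both a binary and a relative success score keeps the headline metric simple while preserving nuance for the detailed report.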

The results were mixed

There were many nuanced findings, but this visualization showing task success was the most impactful, and I shared these findings with the Exec team.

Diagram showing task success across all participants.
Summary presentation.

A more detailed report shared with the Product team included the following (a rough prioritization sketch follows the list):

  • A brief explanation of each issue
  • A severity rank
  • A level of effort (LOE) to fix (L/M/H)
  • A solution (if the solution was understood)
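
As a rough illustration of how severity and level of effort can be combined into a priority order, here is a small sketch; the scoring weights and example issues are placeholders for demonstration, not the scoring used in the actual report.

```python
# Illustrative prioritization sketch: weights and example issues are
# placeholders, not the scoring used in the actual report.

SEVERITY_SCORE = {"high": 3, "medium": 2, "low": 1}
EFFORT_SCORE = {"L": 1, "M": 2, "H": 3}  # level of effort to fix

issues = [
    {"issue": "Example issue A", "severity": "high", "loe": "M"},
    {"issue": "Example issue B", "severity": "medium", "loe": "L"},
    {"issue": "Example issue C", "severity": "high", "loe": "H"},
]

def priority(issue):
    """Simple value-vs-effort ratio: higher severity and lower effort rank first."""
    return SEVERITY_SCORE[issue["severity"]] / EFFORT_SCORE[issue["loe"]]

for item in sorted(issues, key=priority, reverse=True):
    print(f"{priority(item):.2f}  {item['severity']:<6}  LOE {item['loe']}  {item['issue']}")
```

A ratio like this is only a starting point; as described in the next section, the actual prioritization happened through quarterly planning discussions.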

Prioritizing Next Steps For Product Development

With the high-level visualization and the detailed summary in hand, I was able to negotiate more effectively during quarterly planning sessions and advocate for experience enhancements based on opportunity cost. The lowest-hanging fruit was added to the UX roadmap immediately to be ready for engineering.

The Next Level Of Benchmark Maturity

Given more time, I would have continued to evolve the measurement program. Ideas included:

  • Retesting on a quarterly or bi-annual basis
  • Adding more complex tasks once foundational thresholds had been met
  • Giving each PM and Designer ownership over the measurement and reporting for their platform area