Improving Experimentation

Team Director

Background

Bluecore makes software that marketers use to send emails to consumers. The platform includes experimentation functionality for creating A/B tests or holdout groups for those emails.

The Problem

Improving experiment engagement and outcomes

I found several severe issues with the platform’s experimentation features, confirmed through product analytics and conversations with stakeholders:

  1. Customers do not know the features exist
  2. Customers do not use the features
  3. Customers do not use the features correctly

Leadership Challenge

When a feature ‘doesn’t work’, designers frequently jump straight to a full redesign. This project was a great opportunity to work with the product design team on tactical solutions instead. We discussed strategies for understanding where an experience breaks down across the breadth of the user workflow: it might be within the feature, next to the feature, or outside it entirely. We then worked to generate solutions that helped our users achieve specific goals, as opposed to ‘fixing’ the feature.

Why This Problem Matters

Every email sent has a cost. For global brands sending multiple daily emails to millions of customers, that cost can run into the hundreds of thousands of dollars.

To ensure the highest ROI, marketers want to send the emails with the highest conversion rate. To support that, email service providers (ESPs) offer experimentation capabilities that let marketers test variations of email content, layout, and subject lines.

Improving email conversion rates increases both brand and Bluecore revenue. At scale, even a 5% lift in click-through rate (CTR) has a large revenue impact.
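To put that in perspective, here is a rough back-of-the-envelope sketch using purely hypothetical volumes and rates (none of these figures come from Bluecore):

```python
# Back-of-the-envelope illustration; all numbers are hypothetical.
emails_per_day = 5_000_000   # daily sends for a large brand (assumed)
baseline_ctr = 0.02          # 2% baseline click-through rate (assumed)
revenue_per_click = 1.50     # average revenue attributed to a click, USD (assumed)

baseline = emails_per_day * baseline_ctr * revenue_per_click
lifted = emails_per_day * (baseline_ctr * 1.05) * revenue_per_click  # 5% relative lift

# ~$7,500 more per day, or roughly $2.7M per year, before send costs.
print(f"Daily revenue lift: ${lifted - baseline:,.0f}")
```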

Our Approach To Improving Experimentation

The experimentation features were functioning ‘correctly’, but there were user experience issues. I worked with a senior designer to dive deeper.

We did a deep dive into product analytics and led discussions with both heavy and light users of the feature. We paired that with internal stakeholder discussions, a competitive analysis, and desk research on general experimentation best practices.

We discovered several problems, large and small, and prioritized the following for design:

  1. Users forget about ongoing experiments
  2. Users are unsure when to close an experiment
  3. Users alter campaigns with live experiments
  4. Users don’t understand experiment best practices

Through an iterative design process including internal and external feedback, we defined solutions for each of these problems.

1. Users tend to forget about ongoing experiments

The time between launching an experiment and reaching a statistically significant result may be several weeks. With tens or sometimes hundreds of ongoing campaigns, it’s difficult for a user to remember which experiments are live and where they are.

Solution:

Create an Experimentation Hub where all experiments live. This lets users see every ongoing experiment and its status without having to open each campaign.

2. Users are unsure when an experiment is finished

There is no clear indication in the UI when an experiment reaches statistical significance or if it is inconclusive. Tests can run indefinitely.

Solutions:

We designed an email notification template, usable for all experiment types, that is sent to the campaign’s primary owner when an experiment has concluded. These notifications mean users will not have to remember which campaigns need attention.

We moved experiment analytics out of an advanced reporting section and into the campaign itself as a tab. When an experiment is running, its status is displayed in brief on the campaign summary tab and in detail on the experiment tab.

This change puts experiment analytics in a more visible place, so marketers will not have to hunt for the data or interpret the experiment status themselves.
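As context for what ‘concluded’ means in these views: the status shown boils down to a standard two-proportion comparison between the control and the variant. The sketch below is illustrative only; it uses the Python standard library, the function name and inputs are hypothetical, and it is not Bluecore’s actual stats engine.

```python
from statistics import NormalDist

def experiment_status(clicks_a, sends_a, clicks_b, sends_b, alpha=0.05):
    """Illustrative two-proportion z-test on click-through rates."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = (pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return ("significant" if p_value < alpha else "inconclusive"), p_value

# e.g. 400 clicks from 20,000 sends (control) vs. 470 from 20,000 (variant)
print(experiment_status(400, 20_000, 470, 20_000))
```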

3. Users can alter live experiments, invalidating the data

Altering the content of an email campaign while an experiment is running invalidates the data and resets the test: clicks collected before and after the change no longer measure the same treatment. This is not explicitly stated anywhere in the product, which has led to many invalid experiments.

Solutions:

Before editing a campaign, a marketer must create a draft version of it. We added a small warning at that step letting the user know the impact of their decision. This will reduce the number of unintended experiment resets and invalidated results.

In the draft editing view, we added a similar warning that publishing the campaign will reset the experiment. This was important because there can be a gap in time between creating the draft version and publishing it.

4. Users don’t understand experiment best practices

It is easy to create an experiment. It is difficult to create a valid experiment. The Bluecore platform allows users to make multiple selections when setting up an experiment, but those inputs are spread out across the campaign creation flow, and the impact of one decision on the next is unclear.

Solution:

We grouped the experimentation decisions into a single section, and we will validate whether a natural-language input, rather than separate input fields, improves understanding, usability, usage, and outcomes.