Bluecore makes software that allows marketers to send emails to consumers. The platform has experimentation functionality that allows marketers to create A/B tests or holdout groups for those emails.
I found several severe issues with the platform’s experimentation features.
These were confirmed through digging into analytics and conversations with stakeholders.
Every email sent has a cost. For global brands with millions of customers sending multiple daily emails, that cost can be in the hundreds of thousands of dollars.
To ensure the highest ROI, marketers want to send the emails with the highest conversion rate. To support that, email service providers (ESPs) offer experimentation capabilities that let marketers test variations of email content, layout, and subject lines.
Improving email conversion rates increases revenue for both the brand and Bluecore. At scale, even a 5% relative increase in click-through rate (CTR) can have a large financial impact.
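To make that claim concrete, here is a back-of-the-envelope sketch. Every number below (send volume, baseline CTR, conversion rate, order value) is a hypothetical assumption for illustration, not a Bluecore figure.

```python
# Hypothetical illustration of why a small CTR lift matters at scale.
# All numbers are assumptions, not Bluecore figures.

daily_sends = 5_000_000        # emails sent per day by a large brand
baseline_ctr = 0.02            # 2% click-through rate
conversion_rate = 0.03         # 3% of clicks convert
avg_order_value = 80.00        # dollars per conversion

def daily_revenue(ctr: float) -> float:
    """Revenue attributable to email clicks on a given day."""
    return daily_sends * ctr * conversion_rate * avg_order_value

baseline = daily_revenue(baseline_ctr)
lifted = daily_revenue(baseline_ctr * 1.05)   # 5% relative CTR lift

print(f"baseline: ${baseline:,.0f}/day")
print(f"with 5% lift: ${lifted:,.0f}/day")
print(f"incremental: ${lifted - baseline:,.0f}/day")
```

Under these assumptions, a 5% relative CTR lift adds $12,000 per day, or over $4M per year, for a single brand.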
The experimentation features were functioning ‘correctly’, but there were user experience issues. I worked with a senior designer to dive deeper.
We did a deep dive into product analytics and led discussions with both heavy and light users of the feature. We paired that with internal stakeholder discussions, a competitive analysis, and desk research on experimentation best practices.
We discovered several problems large and small, and prioritized the following for design:
Through an iterative design process including internal and external feedback, we defined solutions for each of these problems.
The time between launching an experiment and reaching a statistically significant result may be several weeks. With tens or sometimes hundreds of ongoing campaigns, it is difficult for a user to remember which experiments are live and how far along each one is.
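A standard sample-size calculation shows why these runtimes stretch into weeks. The sketch below uses a textbook two-proportion test approximation; the click rates and daily send volume are illustrative assumptions, not platform data.

```python
import math

# Rough sample-size sketch for a two-proportion test, illustrating why
# A/B tests on small CTR differences can take weeks to conclude.

def sample_size_per_arm(p1: float, p2: float) -> int:
    """Approximate recipients needed per arm to detect a shift
    from rate p1 to rate p2 (alpha=0.05 two-sided, 80% power)."""
    z_alpha = 1.96   # two-sided critical value for alpha = 0.05
    z_beta = 0.84    # z-value for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

n = sample_size_per_arm(0.020, 0.021)   # detect a 5% relative CTR lift
sends_per_arm_per_day = 20_000          # assumed daily volume per variant
days = math.ceil(n / sends_per_arm_per_day)
print(f"{n:,} recipients per arm -> about {days} days")
```

At these assumed rates, each variant needs roughly 315,000 recipients, or a little over two weeks of sends, before a 5% relative lift can be detected reliably.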
Solution:
Create an Experimentation Hub where all experiments live. This allows all users to see ongoing experiments as well as their status without having to open each campaign.
There is no clear indication in the UI when an experiment reaches statistical significance or if it is inconclusive. Tests can run indefinitely.
Solutions:
We designed an email notification template, usable for all experiment types, that is sent to the primary owner of the campaign when an experiment has concluded. These reminders mean users will not have to remember which campaigns need attention.
We moved experiment analytics out of an advanced reporting section and into a tab within the campaign. While an experiment is running, its status is summarized on the campaign summary tab and shown in detail on the experiment tab.
This change makes experiment analytics more visible, so marketers will not have to hunt for the data or interpret the experiment status themselves.
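As a sketch of the logic such a status indicator could use (not Bluecore's actual implementation), a two-proportion z-test turns raw click counts into a "significant" or "inconclusive" label. The counts below are made up.

```python
import math

# Sketch of a significance check an experiment tab could surface,
# using a two-proportion z-test. All counts are hypothetical.

def z_test_p_value(clicks_a: int, sends_a: int,
                   clicks_b: int, sends_b: int) -> float:
    """Two-sided p-value for a difference in click rates."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail

p = z_test_p_value(clicks_a=900, sends_a=50_000,
                   clicks_b=1_050, sends_b=50_000)
status = "significant" if p < 0.05 else "inconclusive"
print(f"p = {p:.4f} -> {status}")
```

Surfacing a computed label like this, rather than raw numbers, is what spares marketers from interpreting the statistics themselves.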
Altering the content of an email campaign while an experiment is running impacts the validity of the data and resets the test. This is not explicitly stated anywhere, which has led to many invalid experiments.
Solutions:
Before editing a campaign, a marketer must create a draft version of the campaign. We included a small warning letting the user know the impact of their decision. This will reduce the number of unintended experiment resets and invalidated results.
While editing the draft version of the campaign, we added a similar message warning that publishing the campaign will reset the experiment. This was important because there can be a gap of time between creating the draft version and publishing it.
It is easy to create an experiment. It is difficult to create a valid experiment. The Bluecore platform allows users to make multiple selections when setting up an experiment, but those inputs are spread out across the campaign creation flow, and the impact of one decision on the next is unclear.
Solution:
We grouped the experimentation decisions into a single section and replaced the separate input fields with a natural language input. We will validate the impact of this change in terms of understanding, usability, usage, and outcomes.