Improving signup conversion

In 2021, the Marley Spoon product org set out to rebuild the entire web application on more modern technologies that would allow for faster iteration and better scalability. The existing tech environment was a constant source of pressure and a bottleneck on delivering customer value, as it was incredibly hard to work with and iterate on. Alongside this migration, our product team opted to capitalize on the transition by testing and learning on the existing conversion funnel, so those learnings could feed into the rebuild from day one. The outcome of our work was an articulated process for AB testing, including a workflow and structured documentation for experiments, as well as a shift in how the product organisation perceived the value of iterative design.

Client

Marley Spoon

Role

Product Design

Year

2021

To focus on the biggest pain points in the conversion funnel, we first spent time exploring the problem space and framing the problems to be solved. We mined existing artifacts for insight: customer sentiment feedback, qualitative surveys sent to new customers, and quantitative behavioral usage data from the funnel. We also ran in-situ surveys on multiple steps of the funnel to understand where customers were having trouble. To capture how touchpoints and processes interact with the customer across the funnel, we ran a service blueprint workshop, bringing in experts from different teams to help us understand the wide range of technologies and interconnected processes involved.

We collated these insights in Airtable and mapped pain points to overarching themes. Some of the major themes that emerged were the mental math involved in selecting a plan, difficulty visualizing a good product fit throughout the sign-up process, and human challenges around errors and language. We also took the opportunity to run a heuristic evaluation of the entire funnel, based on established UX best practices, to look for usability issues and areas for visual improvement.

Our team focused on aligning on a workflow and process that would let us stay lean and move quickly while researching and testing experiments. As a team of three, we assigned responsibilities and levels of involvement based on the stage each experiment was in. I put together a template for documenting our experiments: a source of truth accessible to anyone within the company, understandable without any prior knowledge. For each experiment we captured background on why we were focused on it, a succinct problem statement, a testable hypothesis, information about audience and targeting, and exactly which metrics we would use to determine success, roughly along the lines of the sketch below.
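
As a rough illustration of that structure (the field names and shapes here are my own shorthand, not the exact template we used internally), the documentation captured something like:

```typescript
// Illustrative sketch of the experiment documentation structure.
// Field names and the SuccessMetric shape are assumptions for this example.

interface SuccessMetric {
  name: string;             // e.g. "step 2 reach rate"
  definition: string;       // how the metric is calculated
  expectedChange: string;   // what we expect to see if the hypothesis holds
}

interface ExperimentDoc {
  title: string;
  background: string;       // why we are focused on this experiment
  problemStatement: string; // succinct framing of the problem to solve
  hypothesis: string;       // a single, testable statement
  audience: string;         // who is targeted, e.g. new visitors on web
  targeting: string;        // where and how the experiment is served
  successMetrics: SuccessMetric[];
  status: "draft" | "running" | "concluded";
}
```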

Our first successful experiment tackled the problem of visualizing a good product fit. Customers tended to struggle with the very first step of the funnel: plan selection. Potential customers were slowed down by having to make many choices right at the start, as the step presented two plans side by side, each with options for how many meals you would like and a cost broken down by meal, shipping and weekly total. Our hypothesis was that by making the offer easier to digest, and connecting it to its cost without requiring any math, we would see more users move on to the second step of the funnel.
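
To illustrate the kind of mental math this asked of customers (the prices below are invented for the example, not Marley Spoon's actual pricing), comparing plans effectively meant running several calculations like this in your head before even entering the funnel:

```typescript
// Hypothetical illustration of the arithmetic behind a plan comparison.
// All figures are made up for the example.

function weeklyTotal(
  pricePerServing: number,
  servings: number,
  mealsPerWeek: number,
  shipping: number
): number {
  return pricePerServing * servings * mealsPerWeek + shipping;
}

// Comparing "2 people, 3 meals" against "4 people, 4 meals":
const planA = weeklyTotal(8.5, 2, 3, 5.99); // 56.99 per week
const planB = weeklyTotal(7.0, 4, 4, 5.99); // 117.99 per week
```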

The experiment altered the design by replacing the side-by-side plan options with a single plan at a time, posing simple questions to the user and reframing the first step as ‘building’ your own plan rather than fitting into pre-defined plans. I took a close look at the language we were using and simplified it as much as possible to be more human and accessible. When it came to execution, I supported our engineer with detailed documentation covering the different flows and the CSS implementation.

We had previously been using Google Optimize for small experiments, but as our need for more complexity grew, we onboarded a new AB testing tool, AB Tasty. In collaboration with the team, we set up experiments together and synced weekly while they were running to assess progress and inform the experiments already in our backlog.
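
AB Tasty handled the statistical analysis for us, but conceptually the weekly check boils down to comparing step-2 conversion between control and variant. A minimal sketch of that comparison using a two-proportion z-test (the traffic numbers and the 95% threshold are assumptions for illustration, not our actual results):

```typescript
// Minimal sketch of a two-proportion z-test comparing step-2 reach rate
// between control and variant. Figures are invented for illustration;
// AB Tasty performed this kind of analysis for us.

function zScore(
  conversionsA: number,
  visitorsA: number,
  conversionsB: number,
  visitorsB: number
): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB)
  );
  return (pB - pA) / standardError;
}

// Hypothetical week of traffic: 10,000 visitors per arm.
const z = zScore(2100, 10000, 2310, 10000);
// |z| > 1.96 corresponds to roughly 95% confidence that the variant's
// step-2 reach rate differs from control.
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant" : "not yet significant");
```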