On October 16, 250 mobile product leaders from around the world gathered at Mobilize 2017 to discuss data and experimentation. One theme emerged repeatedly throughout the day: the importance of experimentation and the impact it has on business teams, metrics, and testing culture.
Strava is a social network for athletes with a mobile app to track exercise, participate in Challenges, share photos, and follow friends. Evelyn Cordner, Lead Engineer and Manager of Strava’s growth team, discussed the importance of building a testing culture to make data-driven decisions and most effectively serve the athletes on their platform. A/B testing and iterative development are the backbone of their growth team, because proper testing allows them to:
- Gain a better understanding of their athletes
- Quantify the impact of projects
- Make decisions decisively
- Limit debate and speculation
- Avoid changes that negatively affect KPIs
To avoid pushing uninformed changes to their product, Strava has developed a testing culture and processes to support their goals. The result? A “lean, mean, testing machine” that increased the number of tests that they are able to run per quarter by 10X. As Evelyn notes,
“the more we A/B test, the more we learn, the bigger of an impact we can have.”
So, how did Strava enable their growth team to run more A/B tests? They streamlined the historically time-consuming process of turning an idea for an A/B test into a live, actionable test.
They accomplished this feat by focusing on 3 key areas:
- Analytics
- Release cycle
- Experimental code (the code that backs an A/B test)
First, Strava lacked a way to glean actionable analytics from A/B tests. This is not uncommon: we often hear from industry leaders that adding analytics to A/B tests is a time-consuming process. As Evelyn notes, it can be even more complex than building the A/B test itself.
Their solution was to leverage Apptimize, which offers tools for measuring A/B tests. Strava was particularly drawn to the simplicity of adding events and data visualization to experiments.
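The underlying pattern is simple even though the tooling around it is not: assign each athlete a stable variant, then tag every tracked event with that variant so results can be compared per group. The sketch below illustrates that pattern in Python; it is a hypothetical illustration of the general technique, not Apptimize's actual SDK or Strava's code, and the names (`assign_variant`, `track`, the in-memory `events` list) are all assumptions.

```python
import hashlib

# Hypothetical sketch of experiment analytics: deterministic variant
# assignment plus events tagged with the variant, so a backend can
# compare behavior between groups. Not Apptimize's actual API.

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Hash user + experiment so each athlete sees a stable variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

events = []  # stand-in for a real analytics pipeline

def track(user_id: str, experiment: str, event: str) -> None:
    """Record the event alongside the variant the user was shown."""
    events.append({
        "user": user_id,
        "experiment": experiment,
        "variant": assign_variant(user_id, experiment),
        "event": event,
    })

# The same athlete always lands in the same bucket, and every event
# carries enough context to analyze the test afterward.
variant = assign_variant("athlete-42", "new_upload_flow")
track("athlete-42", "new_upload_flow", "upload_completed")
```

Hashing the user and experiment name together (rather than using the user ID alone) keeps bucket assignments independent across experiments, which is one reason tools handle this for you.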
Secondly, Strava recognized the need to adjust their release cycle so they could launch A/B tests as fast as possible. Their previous mobile release cycle required 2-4 weeks between code completion and launching a test to users, a time-consuming process that limited the number of tests they could run and thus the impact they could have.
Their goal was to shorten the cycle and run many tests focused on similar parts of the app.
Their solution was to adjust their process to allow A/B tests to be merged directly into their release branch. Strava was careful to adjust their internal processes and requirements to ensure that this process change did not have a negative impact on their users.
Finally, Strava felt the strain of writing code to run A/B tests, as engineering time went into polishing code that would be removed if the A/B test failed.
So what did they do? They redefined the code standards for experimental code, allowing shortcuts that get a test into the hands of users as fast as possible. At the end of the experiment, the code is either deleted or updated.
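One common way to make experimental code cheap to delete is to route all of it through a single gate, so tearing down a finished test is one deletion rather than a refactor. The sketch below shows that pattern; it is an assumed illustration, not Strava's actual code, and the function names (`show_onboarding`, `new_onboarding`, `legacy_onboarding`) and the injected `variant_of` assignment function are hypothetical.

```python
# Sketch of flag-gated experimental code (assumed pattern): everything
# inside the treatment branch is experiment-only and can be removed
# wholesale when the test concludes.

def show_onboarding(user_id: str, variant_of) -> str:
    """Route between production and experimental paths.

    `variant_of(user_id, experiment_name)` is an injected assignment
    function standing in for a real A/B testing service.
    """
    if variant_of(user_id, "new_onboarding_v1") == "treatment":
        return new_onboarding(user_id)   # experimental path
    return legacy_onboarding(user_id)    # untouched production path

def new_onboarding(user_id: str) -> str:
    # Experimental code: intentionally duplicates parts of the legacy
    # flow so deleting it never untangles shared helpers.
    return f"new-flow:{user_id}"

def legacy_onboarding(user_id: str) -> str:
    return f"legacy-flow:{user_id}"
```

The deliberate duplication is the "shortcut": it trades polish for a clean seam, which is exactly the tradeoff relaxed experimental code standards allow.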
To keep these relaxed code standards, which allowed for duplicated code, under control, Strava established the Risk Budget Working Group, a cross-functional team made up of engineers, customer support, QA, and product management. This team establishes and maintains error budgets to control product crashes and code quality, enabling the team to experiment within those budgets.
In short, Strava has realized the importance and impact of A/B testing, which has pushed them towards testing and iterating quickly. The company’s emphasis on testing, learning, and growing combined with Apptimize’s suite of products has allowed their growth team to become a “lean, mean, testing machine.”
Watch Evelyn’s presentation from Mobilize 2017.