Mobile A/B Testing at LinkedIn: The Importance of Different Platforms

Apptimize

A few weeks ago, LinkedIn’s engineering team published a great article on their method of A/B testing. We had the pleasure of talking with Akhilesh Gupta, LinkedIn’s Senior Software Engineer for Mobile, over some chocolate coconut water to dive deeper into a few follow-up questions. Akhilesh leads LinkedIn’s server team for mobile and is the author of that A/B testing article. We talked about how A/B testing on the web differs from mobile, and how Android differs from iOS. Read the full interview for the details!


Follow Akhilesh on LinkedIn or @agupta03

Apptimize: How did regular mobile A/B testing and experimentation begin at LinkedIn?

Akhilesh: When the mobile team began A/B testing, the web team had already been doing it for a while and had been successful with it. My team adapted the web platform to mobile, which required real changes because the two platforms are so different. Mobile A/B testing works better when you reduce the burden on the client, which prevents bugs and latency. The philosophy is that the client should not concern itself with the logic of who sees what; that logic sits on the server instead.

Another difference is that we can segment tests based on mobile-specific information. OS version (e.g. iOS 6 vs. 7, Android vs. iOS) and client version (the specific version of the LinkedIn app) greatly affect the experience on mobile. On the web, everyone is on the latest version of your page, so you never have to worry about that. And while you do have to make sure your site looks right on all browsers, OS and browser differences don’t completely change the user’s experience with the site the way they do on mobile.
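Note from Apptimize: to make this kind of segmentation concrete, here is a minimal TypeScript sketch of server-side variant assignment keyed on platform and client version. This is our illustration, not LinkedIn’s code, and every name in it is hypothetical.

```typescript
// A minimal sketch (not LinkedIn's actual code) of server-side variant
// assignment segmented by platform and OS version. All names are hypothetical.

type Platform = "ios" | "android";

interface ClientContext {
  memberId: string;
  platform: Platform;
  osVersion: string;   // e.g. "7.0" on iOS
  appVersion: string;  // the specific build of the app
}

// Deterministic hash so the same member always lands in the same bucket.
function bucket(memberId: string, experiment: string, buckets: number): number {
  let h = 0;
  for (const ch of memberId + experiment) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % buckets;
}

// Only members whose client can actually render the variant are enrolled;
// everyone else falls back to the control experience.
function assignVariant(ctx: ClientContext): "control" | "treatment" {
  const supportsVariant =
    ctx.platform === "ios" && parseFloat(ctx.osVersion) >= 7;
  if (!supportsVariant) return "control";
  return bucket(ctx.memberId, "example-experiment", 2) === 0
    ? "treatment"
    : "control";
}
```

Because the assignment lives on the server, the segmentation rules can change at any time without shipping a new app version.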

Apptimize: The article published on the LinkedIn engineering blog talked a lot about the benefits of view-based JSON for server-side logic. What made LinkedIn test in this way, and are there any disadvantages?

Akhilesh: In order to reduce the burden on the app client, LinkedIn had moved to view-based JSON even before we started A/B testing. We were finding that having a lot of logic on the client was causing bugs that we sometimes could not easily fix on the fly. So we developed a system where the client holds a set of predefined views and the server controls which views are rendered to the user. The logic lives on the server. This solved our problem of a heavy client and also made implementing A/B testing much easier than it would have been otherwise. Additionally, our experimentation team had already been doing A/B testing very successfully on the web. Many things needed to be customized for mobile, since A/B testing on mobile is very different from the web, but it meant that much of the infrastructure already existed when we started A/B testing on mobile.
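Note from Apptimize: here is a short TypeScript sketch of what a view-based JSON setup can look like, with the A/B decision on the server and a simple view catalog on the client. The payload shape and names are our own invention for illustration, not LinkedIn’s actual format.

```typescript
// A sketch of the view-based JSON idea described above. The server decides
// which predefined views to send; the client simply renders what arrives.

interface View {
  viewType: string;              // must match a view the client ships with
  data: Record<string, string>;  // content to pour into that view
}

// Server side: the A/B decision lives here, so launching a new test
// never requires a client release.
function buildFeed(variant: "control" | "treatment"): View[] {
  const job = { title: "Senior Software Engineer", company: "LinkedIn" };
  return [
    variant === "treatment"
      ? { viewType: "job_card_red_title", data: job }
      : { viewType: "job_card", data: job },
  ];
}

// Client side: a plain lookup from viewType to a renderer the app already
// knows how to draw. Unknown view types are skipped gracefully.
const renderers: Record<string, (data: Record<string, string>) => void> = {
  job_card: (d) => console.log(`[job] ${d.title} @ ${d.company}`),
  job_card_red_title: (d) => console.log(`[job, red] ${d.title} @ ${d.company}`),
};

for (const view of buildFeed("treatment")) {
  renderers[view.viewType]?.(view.data);
}
```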

There are some disadvantages to this method. We’ve found that it’s sometimes useful for the client to know what it’s rendering. For example, say we want to test making job descriptions appear in red in some cases. With our current system, we’d have to create a new view type and have the server trigger the red view, because the client isn’t able to change the color on the fly. This is fine, except that you may end up with a lot of different views. We’re thinking about moving to a hybrid model so that the client can make some lightweight decisions like this one.
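Note from Apptimize: the hybrid model Akhilesh mentions might look something like this sketch, where the server attaches an optional style hint that an existing client view applies itself, instead of minting a new view type per cosmetic change. The field names here are hypothetical.

```typescript
// A sketch of a hybrid model: the server sends a lightweight style hint
// alongside the view, and the client applies it to a view it already has.

interface StyledView {
  viewType: "job_card";
  data: { title: string; company: string };
  style?: { titleColor?: string }; // optional, so older payloads still work
}

function renderJobCard(view: StyledView): void {
  const color = view.style?.titleColor ?? "default";
  console.log(`${view.data.title} @ ${view.data.company} (title: ${color})`);
}

// The red-title test now needs no new view type on the client:
renderJobCard({
  viewType: "job_card",
  data: { title: "Senior Software Engineer", company: "LinkedIn" },
  style: { titleColor: "red" },
});
```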

Apptimize: What is the most surprising thing you’ve learned from A/B testing?

Akhilesh: Behavior patterns on different platforms are very different. For example, the way users interact with notifications on Android is very different from the way they do on iOS. On Android, you can do a lot within the notification without actually opening the app, whereas notifications on iOS usually force you to launch the app in order to respond. This means that many Android users visit the homepage far less often. (Note from Apptimize: read more about this in our recent post on the differences between Android and iOS A/B testing.)

Apptimize: How do you manage A/B tests in an organization as large as LinkedIn?

Akhilesh: We have three kinds of review processes at LinkedIn: product reviews, design reviews, and engineering reviews. At least one representative from each of the three groups is present during every review, so ideas for A/B tests can be brought up and discussed. We also have an experimentation team that includes many data scientists with a great deal of testing experience.

This is great because all parties will have different points of view and ideas. For example, the product team may perform many user tests that weed out several versions before we start A/B testing. The experimentation team might suggest concepts such as connection density (how many contacts an individual has) and have recommendations on how to leverage that information for testing.

We also beta test our own apps within the company before launching. Android allows staged rollouts, but we haven’t figured out how to do that on iOS, so we test really thoroughly before launching new client versions.

Endnote from Apptimize: Thank you, Akhilesh, for your time! We’re so happy to learn from companies with this much mobile A/B testing experience. As a note to our readers, Apptimize does enable our customers to render new visual templates on the fly after a client version has shipped, and we also enable staged rollouts for iOS. Learn more about our features here.

About Apptimize

Apptimize is an innovation engine that provides A/B testing and feature release management for native mobile, web, mobile web, hybrid mobile, OTT, and server. Industry leaders like HotelTonight, The Wall Street Journal, and Glassdoor have created amazing user experiences with Apptimize.

Thanks for reading!