Test Motivation & Goals
In an effort to optimize our handset sales journey, we realized that a large portion of our customers buy a new device from the same brand as their existing phone. We found this through Google Analytics reports (device model combined with transactional data).
We also ran an onsite survey on our device-listing page to understand customer behavior. The survey results confirmed that brand stickiness is prevalent among customers looking to buy a new device online.
To further substantiate our assumptions, we held a panel discussion with a group of our retail colleagues to understand what customers find important when choosing a new phone. Our colleagues in retail confirmed that customers do indeed tend to stick to the same operating system and brand as their existing device.
As a result, we decided to take our first step towards personalizing our online sales journey, starting with the homepage. Specifically, we used the top header banner position, which is always in view for customers landing on the homepage. We normally use this banner position to communicate one generic “hero” offer during campaign periods. We therefore wanted to test the impact of a personalized offer (based on the user’s device OS) in this banner versus a one-size-fits-all offer.
Our expectation was that add-to-carts and sales would increase when customers saw a personalized offer on the homepage, due to their loyalty to one brand.
If we show iOS customers an Apple device banner instead of a Samsung device banner in the homepage header position, then we will increase the click-through rate to the product detail page and add-to-carts, due to the increased self-relevance of the offer.
To increase our chances of measuring a significant effect (test power) and to reduce the risk of sample pollution, we tested a single variant in the form of an A/B test. (With a multivariate test, there is a greater chance of landing in the wrong variant after cookies are deleted, and the longer test duration further increases pollution.)
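As a rough illustration of the power consideration, a standard two-proportion sample-size formula shows how quickly the required traffic grows when detecting small effects. The baseline rate and minimum detectable effect (MDE) below are illustrative assumptions, not figures from this test:

```javascript
// Rough sample-size sketch for a two-sided A/B test on a conversion rate,
// assuming alpha = 0.05 and power = 0.80 (z-values hardcoded for simplicity).
function sampleSizePerVariant(baselineRate, relativeMde) {
  const zAlpha = 1.96; // two-sided 5% significance
  const zBeta = 0.84;  // 80% power
  const p = baselineRate;
  const delta = p * relativeMde; // absolute lift we want to detect
  const n = (2 * Math.pow(zAlpha + zBeta, 2) * p * (1 - p)) / Math.pow(delta, 2);
  return Math.ceil(n);
}

// Example (illustrative): a 3% add-to-cart rate with a 5% relative MDE
// requires roughly 200k visitors per variant.
const n = sampleSizePerVariant(0.03, 0.05);
```

Every extra variant needs its own sample of this size, which is why a single-variant A/B test reaches significance faster than a multivariate test on the same traffic.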
The test group comprised all mobile and tablet iOS users, and the test was run through Optimizely. To target the right visitor group, we used Optimizely’s built-in segmentation to create an audience of iOS users.
The main change in the variant was the device offer in the main banner. In the control, visitors saw an offer for the Samsung S20; in the variant, visitors saw an offer for the iPhone 11. In both cases, the call to action in the banner led visitors to the respective device’s product detail page.
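The targeting and variant logic can be sketched roughly as follows. In the actual test, Optimizely’s built-in audience conditions handled the iOS check; the user-agent test and banner URLs below are illustrative assumptions only:

```javascript
// Minimal sketch of OS-based banner personalization. iOS devices identify
// themselves as iPhone, iPad or iPod in the user agent string.
function detectOS(userAgent) {
  if (/iPhone|iPad|iPod/.test(userAgent)) return "ios";
  if (/Android/.test(userAgent)) return "android";
  return "other";
}

// Hypothetical banner config: the control shows the generic Samsung S20
// offer; the variant shows the iPhone 11 offer to iOS users. URLs invented.
function selectBanner(userAgent, inVariant) {
  if (inVariant && detectOS(userAgent) === "ios") {
    return { device: "iPhone 11", ctaUrl: "/phones/apple-iphone-11" };
  }
  return { device: "Samsung S20", ctaUrl: "/phones/samsung-galaxy-s20" };
}
```

Because the OS check is deterministic, the only randomized element is the A/B split itself, which keeps the two groups directly comparable.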
The test ran for 18 days. We weren’t able to run the test for full weeks due to the duration of the campaign period.
+232.60% increase in banner clicks
+3.06% increase in PDP visits
+5.89% increase in add-to-carts
+26.31% increase in sales
The positive effect was visible across all segments (acquisition & retention) and products (device + SIM subscription and SIM-only subscription).
Product sales data also indicates that the personalized banner had a positive impact not only on iPhone 11 sales but also on other iPhone sales.
It is also interesting to see that the personalized banner did not impact non-Apple sales; this could imply that customers who do intend to buy an Android device are able to do so anyway.
Note: For homepage banner tests we tend to make add-to-carts our main test KPI (hence our hypothesis), since the final transaction step sits much further down the sales funnel. It is therefore unrealistic to expect significant results on sales from homepage tests, because a very large uplift would be needed to declare a winner.
In this case however, showing a personalized banner on the homepage not only had a significant effect on add to carts but also on overall transactions further down the sales funnel.
It could be that customers in the variant used their existing reference point of an Apple device to more easily process the sales offer in the banner.
This is especially interesting, because the control group saw an Android device which also had a large discount (and a free headset!) as part of the sales offer.
However, an existing Apple user could have less affinity for an Android device, making it a less appealing offer for this customer group and therefore giving the customer less incentive to move forward in the funnel.
The results of this test have shown stakeholders the importance of personalizing customer journeys wherever possible. Together, we have identified other positions on our website where simple personalization can massively enhance the customer journey.
We are currently working on a new homepage in which offer personalization will be built in. We have also experimented with personalizing the order of the offers on our device-listing page (i.e. Apple users see Apple devices at the top of the page instead of a mix of Apple and Android devices).
Why should this test win an award?
This test demonstrates that personalization does not always have to be elaborate. It is possible to start small, and pick relevant personalization criteria in order to maximize the customer’s online experience.
While it may seem obvious that customers tend to stick to their existing device brand (our own behavior suggests as much), it is important to substantiate these assumptions with both qualitative and quantitative data, so that we can create better-quality tests and increase our chances of a winner. The above test ticks all these boxes and hence makes a case for a scalable approach to personalizing other aspects of the online sales journey.