Optimising the Customer Experience: Conversion Rate Optimisation Part 2

Optimising your customers' experiences on your website, including your mobile site, is the first step toward creating frictionless customer journeys and making sure your customers connect personally with your brand. Every time a user bounces from your site, you've wasted money.

In this series of articles, I'm walking through the entire process of optimising user experiences. In the first article of the series, I talked about creative consistency: making sure the content customers see when they arrive on your site follows directly from the advertising creative they clicked to get there.

But creative consistency is just the beginning. You've still got a lot of decisions to make: how to organise the sections of the landing page, how to draw attention to the most salesworthy attributes of your products and offers, what graphics to use to present your offer, and how to manage flow throughout the page.

That's where A/B testing and multivariate testing come in.

Building recipes

A/B testing puts multiple versions of a page, or a piece of content, head to head against each other. Multivariate testing often builds on A/B testing, enabling you to isolate the elements of the page that are actually driving user interaction.

The principle behind multivariate testing is simple: generate a variety of versions of your page, varying content, layout, offer presentation, and even text and button styles, and see which of those versions works best for your chosen goals. In other words, multivariate testing can help you build different "recipes" for user experiences: where on your landing page the "killer questions" should be placed, how many steps the onsite conversion process should take, what kind of validation you should require, and even which colours and call-to-action buttons to use.

Multivariate testing often generates a large number of variations that need to be tested, which requires a lot of site traffic to gather the necessary results. One way to speed up this process is to run what's known as a partial-factorial test; that is, to test only a subset of all these variations of elements.
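To make the idea concrete, here is a minimal Python sketch with made-up element names. It enumerates the full set of recipes from a few element variations, then keeps only a fraction of them. A true partial-factorial design chooses that subset carefully so the main effects of each element remain measurable; a simple random sample stands in for that here, purely to show where the traffic saving comes from.

```python
import itertools
import random

# Hypothetical page elements and their variations (illustrative only).
elements = {
    "headline": ["benefit-led", "question-led"],
    "hero_image": ["product", "lifestyle", "none"],
    "cta_colour": ["green", "orange"],
    "form_steps": ["one-step", "two-step"],
}

# Full factorial: every combination of every variation is one "recipe".
full_factorial = [
    dict(zip(elements, combo))
    for combo in itertools.product(*elements.values())
]
print(len(full_factorial))  # 2 * 3 * 2 * 2 = 24 recipes

# Partial factorial: test only a subset, so each recipe gets enough
# traffic to produce a readable result sooner.
random.seed(42)
partial = random.sample(full_factorial, k=8)
print(len(partial))  # 8 recipes instead of 24
```

Even in this toy example, four elements already produce 24 recipes; with realistic numbers of elements and variations, testing the full grid quickly becomes impractical without a subset.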

Comparing recipes

Once you've generated your "recipes," it's time for some testing. In this phase, you'll actually show each version of your page to real customers and measure the ways they respond to the various layouts, offers, and button and text styles you're testing.

You'll be measuring how users respond not only to each "recipe," but also to each individual element on the page, from buttons and text to images and ad creative. By combining a large number of individual use cases, you can determine, in real time, which recipe generates the most conversions.
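The core comparison is simple arithmetic: conversions divided by visitors, per recipe. A minimal sketch, with invented figures (a real testing platform would also check statistical significance before declaring a winner):

```python
# Illustrative results only: visitors and conversions per recipe.
results = {
    "recipe_a": {"visitors": 5000, "conversions": 150},
    "recipe_b": {"visitors": 5000, "conversions": 185},
    "recipe_c": {"visitors": 5000, "conversions": 160},
}

def conversion_rate(stats):
    """Fraction of visitors who converted."""
    return stats["conversions"] / stats["visitors"]

# Rank recipes from best to worst conversion rate.
for name in sorted(results, key=lambda n: conversion_rate(results[n]),
                   reverse=True):
    print(f"{name}: {conversion_rate(results[name]):.2%}")

winner = max(results, key=lambda n: conversion_rate(results[n]))
print("best recipe:", winner)  # recipe_b
```

In practice you would also break the same raw data down per element (headline, image, button) rather than only per whole recipe, which is exactly the element-level measurement described above.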

Many CMOs still trust their marketing and web teams to handle this kind of optimisation manually, on the fly, but that's a waste of money, because it means paying for hours of work that'll never be used. It's much more efficient to use a testing platform, take the guesswork out of landing page design, and let your audience, and the data, tell you which design will generate maximum conversions.

The effectiveness of testing

At our recent customer event, the Adobe Summit EMEA in London, a number of brands were gracious enough to share some of the wins they've seen from their testing programs.

Will Harmer, Senior Manager of Insights and Optimisation at EE, the largest mobile network operator in the United Kingdom, described how he and his team have been optimising their product detail page, the single most important page on their shop site. Customers are poised to make a major buying decision on this page, and an improvement of even a fraction of a percent can deliver big financial returns.

Harmer and his team decided to use multivariate testing to test a theory they'd been working on.

EE customers, Harmer knew from earlier analyses, have a large number of phones and plans to choose from, and they often find those choices overwhelming. Harmer and his team believed that showing visitors the most-purchased phones and plans on the website would provide social proof, boosting visitors' confidence in their plan of choice and leading to more conversions.

When Harmer and his team tested this theory using multivariate testing, they immediately saw a 9 percent lift in orders per unique visitor.

The team is now exploring how social proof can guide customers through other complex decisions. For more information on this concept, take a look at this article from my colleague Blair Keen on the Econsultancy blog. For many other testing examples, you can access the full recording of the Summit EMEA session here.

Once you've optimised your landing pages and the site experience, the next step is to start thinking about the content you'll need to support the conversion funnel on your site, and all other aspects of the onsite experience. This will help ensure that as many users as possible not only enter the conversion funnel but get all the way through it and become customers.

This is one aspect of what we call onsite content velocity, and it's exactly what we'll be examining in the third and final article of this series. See you there.