Why Automation Delivers Better Customer Experiences

I’m currently renovating my house, and recently needed some items for the kitchen. After searching on Google, I found the items at a retailer that has a branch in my neighbourhood. I clicked through to their site, arriving straight onto a landing page featuring my item (a novelty in itself, even today), and was able to check the stock in my local store, and then click and collect.

This fairly mundane example describes a basic experience expected by today’s consumers. If we don’t click through to the items we’re expecting from the advert, we leave the site. If we can’t check stock for the item in a local store, or take advantage of click and collect, we also flee the retailer’s online presence.

Whatever we’re buying, we now expect personalised experiences that connect across channels and devices, guiding us along seamless journeys toward a purchase and, ultimately, long-term loyalty. However, there is still a major disconnect between what consumers want and what businesses offer. Although nearly two thirds of consumers feel loyal toward brands that tailor experiences to their preferences and needs, fewer than 10 percent of organisations feel they are effectively personalising their messaging.

This gap has several interconnected causes. Most businesses recognise that personalisation can increase their relevance to consumers and improve their primary key performance indicators (KPIs), but many simply don’t know where to start. It can be hard to develop a roadmap and prove success along the way. Sometimes it’s even hard to get buy-in from the business’s leaders, who may lack faith that machines and algorithms can effectively handle “human” jobs at scale.

On a more practical level, it’s challenging to develop a strategy for deploying artificial intelligence (AI) in accordance with a brand’s unique needs and goals, as well as those of its customers. Even where that strategy exists, many businesses remain too siloed to execute it. In most organisations, incomplete profiles of the same customers are fragmented across various departments and marketing channels. For example, it’s frustrating when your telephone provider offers support over Twitter but can’t connect that conversation to the one you’re having with the call centre team, as happened to me last year. As the 2017 Adobe Digital Insights Summit Survey reports, close to 40 percent of organisations work with three or more analytics platforms, three or more attribution platforms, or three or more data management platforms. With each tool focused on a specific element of the data, customer profiles are bound to be fragmented.

The good news is that automation technology progresses every year. Tools such as Adobe Target now provide user-friendly solutions to many of these problems. By leveraging a spectrum of automated personalisation, ranging from rules-based personalisation all the way to fully automated recommendations, businesses are handing the repetitive heavy lifting over to machines, freeing their marketing experts to focus on the innovative thinking that truly differentiates their brand.

Here are four reasons why your organisation needs to embrace automation to achieve deeper, faster insights and stronger return on investment (ROI).

Resources are assigned to the activities that drive conversions exactly when they’re needed.

Blind spots pose a major problem for any business lagging behind on automation. Auto-allocation identifies conversion-driving activities and seamlessly routes traffic to statistically significant winners.

The latest generation of tools can not only A/B test multiple messages but also shift traffic to the best-performing experience in real time, as they learn. This “multi-armed bandit” testing approach keeps running the test on 20 percent of your traffic while delivering the optimised experience to the other 80 percent, so conversions and ROI steadily increase while the test is still running.
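To make that 80/20 split concrete, here is a minimal epsilon-greedy sketch in Python. It is an illustration only, with assumed names and data: the simple “send everyone else to the current leader” rule stands in for the more sophisticated statistics a product such as Adobe Target actually uses.

```python
import random

# Running tallies for each experience ("arm") -- assumed structure for illustration
stats = {name: {"visitors": 0, "conversions": 0} for name in ("A", "B", "C")}

EXPLORE_RATE = 0.2  # keep testing on 20% of traffic, exploit the leader on the other 80%

def choose_experience():
    """Epsilon-greedy allocation: explore 20% of the time, otherwise serve the current best."""
    untested = [name for name, s in stats.items() if s["visitors"] == 0]
    if untested or random.random() < EXPLORE_RATE:
        return random.choice(untested or list(stats))   # keep learning
    return max(stats, key=lambda n: stats[n]["conversions"] / stats[n]["visitors"])

def record_result(experience, converted):
    """Feed each visitor's outcome back so the allocation keeps improving."""
    stats[experience]["visitors"] += 1
    stats[experience]["conversions"] += int(converted)
```

The key point is the feedback loop: every visitor’s outcome updates the statistics that decide where the next visitor is sent.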

Say you’re running a conventional A/B test on an audience of 300,000 people. The test allocates Experience A to 100,000 people, Experience B to another 100,000, and Experience C to the final 100,000. Experience C turns out to be the best performer, making you £700,000, while Experience A makes £600,000, and Experience B makes only £550,000. By running this standard A/B test, you made £50,000 more than if you’d just served Experience A to every customer.

Now watch what happens with an auto-allocated multi-armed bandit A/B test on the same audience of 300,000, with the same differences in profitability among the three experiences. As the two-week test runs, the algorithm learns that Experience C is the most profitable, so it begins directing the bulk of the traffic to that experience. By the end of the two weeks, 30,000 people have been served Experience A, another 30,000 have seen Experience B, but 240,000 people have interacted with Experience C, which netted £1,680,000 as a result. The multi-armed bandit test made £225,000 more than using Experience A alone.
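The figures are easy to reproduce. This short sketch simply re-runs the arithmetic from both scenarios, using the per-visitor values implied by the example (£6.00 for Experience A, £5.50 for B, £7.00 for C).

```python
# Per-visitor revenue implied by the example: £600k, £550k, £700k per 100,000 visitors
value = {"A": 6.00, "B": 5.50, "C": 7.00}

baseline = 300_000 * value["A"]  # serve Experience A to everyone: £1,800,000

standard_ab = 100_000 * (value["A"] + value["B"] + value["C"])                      # £1,850,000
auto_allocated = 30_000 * value["A"] + 30_000 * value["B"] + 240_000 * value["C"]   # £2,025,000

print(f"Standard A/B test gain over A alone: £{standard_ab - baseline:,.0f}")       # £50,000
print(f"Auto-allocated test gain over A alone: £{auto_allocated - baseline:,.0f}")  # £225,000
```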

Swisscom recently used auto-allocation to determine which phone model to picture as the default image on its product details page. They found that a silver iPhone delivered a 37 percent lift in conversions over the grey model they’d been using. This might not sound radical, but it highlights how something simple can have an impact on engagement and sales. Most companies are sitting on this kind of potential without necessarily realising it.

Recommendations become more targeted and easier to scale.

We’ve all witnessed the power of personalised recommendations on sites such as Amazon and Netflix. No matter what industry you’re in, you likely have a large collection of digital assets that need to be recommended at scale. The more precisely you tailor these recommendations, the more you’ll be able to serve customers the products, videos, content, and other assets they’ll want to interact with.

One of the most powerful tools for achieving this is an algorithm known as “item-based collaborative filtering.” By ranking your digital assets by popularity, recency, and frequency, as well as in relation to many attributes within each customer’s profile and purchase history, this algorithm serves the recommendations with the highest likelihood of driving clicks, sales, or whatever your particular goals are. You can even add manual rules, based on your own expert insights, and the algorithm automatically factors those rules into its decisions, too.
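For readers who want to see the mechanics, here is a minimal item-based collaborative filtering sketch in Python. The interaction data, the plain cosine-similarity scoring, and the function names are all assumptions for illustration; a production recommender such as Adobe Target additionally blends in the popularity, recency, frequency, and manual rules described above.

```python
from collections import defaultdict
from math import sqrt

# Toy interaction data: which customers viewed or bought which items (assumed, for illustration)
interactions = {
    "alice": {"kettle", "toaster"},
    "bob":   {"kettle", "tap"},
    "carol": {"toaster", "tap", "worktop"},
}

def item_similarity(item_a, item_b):
    """Cosine similarity between two items, based on the customers they share."""
    users_a = {u for u, items in interactions.items() if item_a in items}
    users_b = {u for u, items in interactions.items() if item_b in items}
    if not users_a or not users_b:
        return 0.0
    return len(users_a & users_b) / sqrt(len(users_a) * len(users_b))

def recommend(customer, top_n=3):
    """Score every unseen item by its similarity to the items this customer already chose."""
    seen = interactions.get(customer, set())
    catalogue = {item for items in interactions.values() for item in items}
    scores = defaultdict(float)
    for candidate in catalogue - seen:
        for owned in seen:
            scores[candidate] += item_similarity(candidate, owned)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))  # e.g. ['tap', 'worktop']
```

The intuition is exactly the one described above: items are recommended because customers with similar histories chose them together, not because a merchandiser paired them by hand.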

The more this algorithm learns from customer interactions, the more it begins recommending combinations and permutations of assets that never occurred to you before. These combinations might make no sense from a traditional viewpoint, but because they’re inspired by your customers’ viewing and purchase patterns, they’ll drive undeniable results.

Swisscom tested Adobe’s item-based collaborative filtering in its smartphone app, where the algorithm ranked the self-service tiles on its help page. Even though the tiles’ names and content remained the same, simply reordering them produced an immediate 2.94 percent lift in clicks. Swisscom is now testing the algorithm on its news stories page, and on various product recommendation pages.

Significant offers achieve the greatest impact when driven by automated customer profiling.

Although it’s crucial to leverage machine learning to optimise the delivery of large libraries of content, sometimes the reverse is true: you have just a handful of cornerstone offers that need to be delivered well. When working with a small number of offers, a technique known as “full-factorial machine learning” ensures that each offer is matched to the right visitor at the right moment.

This technique uses a range of complementary algorithms (e.g., residual variance, random forest, and lifetime value), along with data from visitors’ profiles. The model also learns from changing population behaviour in real time, enabling it to rebuild itself and weight each of those algorithms differently in response to specific behavioural triggers.
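The article doesn’t spell out Adobe’s implementation, so the sketch below is only a rough illustration of the general idea: several scoring models each rate the available offers, and periodically re-fitted weights decide how much say each model gets. All model names, offer names, scores, and weights here are made up for the example.

```python
# Hypothetical per-offer scores from three complementary models for one visitor
model_scores = {
    "residual_variance": {"broadband_offer": 0.42, "phone_upgrade": 0.61},
    "random_forest":     {"broadband_offer": 0.55, "phone_upgrade": 0.40},
    "lifetime_value":    {"broadband_offer": 0.30, "phone_upgrade": 0.70},
}

# Weights would be re-fitted as population behaviour changes; these values are assumed
weights = {"residual_variance": 0.2, "random_forest": 0.5, "lifetime_value": 0.3}

def best_offer(scores, weights):
    """Return the offer with the highest weighted ensemble score."""
    offers = next(iter(scores.values()))
    totals = {o: sum(weights[m] * s[o] for m, s in scores.items()) for o in offers}
    return max(totals, key=totals.get)

print(best_offer(model_scores, weights))  # 'phone_upgrade' with these assumed numbers
```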

For example, look at Swisscom’s homepage. You’ll see it’s more minimal, in terms of content, than the homepages of many of its competitors. This is because Swisscom learns from its customers, serving only the most relevant content to new visitors. You’ll see only the offers that are statistically most likely to catch your interest, which makes you more likely to click.

With automated targeting, the right experience always reaches the right customer.

Effective personalisation is also about delivering holistic experiences tailored to each customer. To maximise your clicks and conversions, you need to be able to A/B test not only pages or products, but also large-scale variations in content, navigation, layout, timing, and many other interrelated attributes.

That’s why multi-armed bandit testing targets not only individual pieces of content but whole customer experiences: it can test and target entire app or website layouts, as well as functionality such as personalised navigation and featured offers, all from one convenient control panel.

All these techniques boil down to the same central principles: Speak to your customers the way they want to be spoken to. Start simple, by automating A/B testing to achieve greater insights and ROI. Keep scaling automation beyond simple personalisation of assets to deliver holistic experiences targeted around each customer’s needs.