Solving the Personalisation Gap with Automation

Many advertisers think they’re delivering better customer experiences than ever before. In the 2017 Adobe Digital Insights report delivered at this year’s EMEA Summit, a full 58 percent of advertisers said they feel their ability to deliver great customer experiences is improving.

But consumers don’t necessarily agree. Only 38 percent of customers say brands are actually getting better at giving them the experiences they want. One major reason for this is that nearly one third of retailers, by their own admission, have limited or no capability to support personalisation efforts at scale.

Personalisation carries a host of challenges, it’s true. Many brands simply don’t know where to begin increasing their relevance. The process seems difficult; no two companies will follow the same roadmap, and that makes it challenging to determine the right key performance indicators (KPIs) and measure success. Even when a brand recognises the importance of personalisation, many of its channels, assets, and profile data remain unconnected and poorly aligned for coordinated action.

At root, though, the concept behind personalisation is simple: use CRM and behavioural data to know your customers, deliver them the offers, recommendations, and user interfaces (UI) they want, and use their feedback to improve and automate even better experiences.

Here are four simple tactics you can begin using right now, as your organisation develops its own unique strategy along the spectrum from manual to automated personalisation.

Auto-allocate test traffic

A/B testing has come a long way in the past year or so. The latest software can now automatically shift traffic to the best-performing experience in real time as it learns, so you never have to manually adjust your tests. This is known as the multi-armed bandit testing approach, because it constantly runs tests on 20 percent of traffic while delivering optimised experiences to the other 80 percent.

When using this approach, you don’t have to calculate a run time or sample size for your test up front, as you would in a standard testing model. In the old world, you’d allocate a third of your traffic to each of three different experiences; each experience would get the same number of visitors, but the revenue each generated would differ.

In an auto-allocated A/B test, on the other hand, the software automatically redirects traffic toward better-performing experiences, raising your revenue immediately, in real time, as the testing algorithm learns. This speed is particularly useful when you’re running a campaign pegged to a time-sensitive event (like a holiday or a new product launch) and need to discover a winning experience right away.
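If it helps to see the mechanics, here is a minimal sketch in Python of one simple allocation strategy (epsilon-greedy) that behaves along these lines. The class, experience names, and revenue figures are invented for illustration; commercial testing tools implement far more sophisticated versions of this and handle it for you.

```python
import random

# A minimal epsilon-greedy traffic allocator (illustrative only).
# With epsilon = 0.2, roughly 20 percent of visitors keep exploring all
# experiences, while the other 80 percent go to the best performer so far.
class TrafficAllocator:
    def __init__(self, experiences, epsilon=0.2):
        self.epsilon = epsilon
        self.visits = {e: 0 for e in experiences}
        self.revenue = {e: 0.0 for e in experiences}

    def choose_experience(self):
        """Pick an experience for the next visitor."""
        if random.random() < self.epsilon:
            return random.choice(list(self.visits))  # explore
        # Exploit: highest average revenue per visitor, trying unseen arms first.
        return max(
            self.visits,
            key=lambda e: (self.revenue[e] / self.visits[e])
            if self.visits[e] else float("inf"),
        )

    def record_result(self, experience, revenue):
        """Feed each visitor's outcome back so the allocation keeps learning."""
        self.visits[experience] += 1
        self.revenue[experience] += revenue


allocator = TrafficAllocator(["experience_a", "experience_b", "experience_c"])
for _ in range(10_000):
    shown = allocator.choose_experience()
    # Random stand-in for the real per-visitor revenue signal from analytics.
    allocator.record_result(shown, revenue=random.choice([0.0, 0.0, 25.0]))
```

The point is the feedback loop: every visitor’s result nudges where the next visitor is sent, which is why no up-front sample size calculation is needed.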

And yet, while multi-armed bandit A/B testing is a powerful one-size-fits-all approach, it’s only one of an array of tactics you should be looking at.

Provide collaborative recommendations

Whether you’re in retail or any other industry, you’ve probably got a large collection of digital assets that need to be personalised at scale. You’re looking for a more precise way to deliver relevant recommendations about those assets to your visitors, serving them the products, videos, content, and other assets they’ll actually want to interact with.

One powerful tactic for achieving this is known as _item-based collaborative filtering_. It’s item-based because it groups items to make better recommendations, and collaborative because it groups them according to an array of factors, including content similarity, popularity, timeliness, frequency, and similarities with attributes in customers’ profiles. You can also add manual rules based on your own insights, and the algorithm will automatically factor those rules in, too.

As this algorithm learns from your customers, it’ll recommend combinations of assets you never would have thought of, combinations that might not make sense from a conventional standpoint, but the results will speak for themselves.
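To illustrate the core of the item-based idea, here is a minimal sketch in Python using a toy interaction matrix. The asset names and numbers are invented, and a production recommender would also blend in popularity, timeliness, frequency, profile attributes, and your manual rules, which this sketch omits.

```python
import numpy as np

# Toy interaction matrix: rows are visitors, columns are assets (products,
# videos, articles...). A 1 means the visitor engaged with that asset.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
])
assets = ["hiking_boots", "rain_jacket", "tent", "camp_stove", "wool_socks"]

# Item-item cosine similarity: two assets are "similar" when the same
# visitors tend to engage with both of them.
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0)  # an item shouldn't recommend itself

def recommend(visitor_row, top_n=2):
    """Score every asset by its similarity to what this visitor already likes."""
    scores = similarity @ visitor_row
    scores[visitor_row > 0] = -np.inf  # skip assets already seen
    return [assets[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(interactions[0]))  # suggestions for the first visitor
```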

Automate delivery of the perfect offer

In a sense, this is the reverse of the previous section: it’s for times when, instead of a huge library of assets that need to be recommended in clusters, you’ve got a few highly significant pieces of content that need to be served to the right users at the right times. When you’re working with a small number of offers like this, full-factorial machine learning is a powerful tool for matching offers to visitors.

This style of machine learning uses multiple algorithms (e.g., residual variance, random forest, and lifetime value) to provide ranked offers to visitors based on their profile data. As population behaviour changes, the model rebuilds itself automatically to weight each of the algorithms differently; in fact, it can even provide the option to weight certain variables more highly in response to specific behavioural triggers.
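As a rough sketch of just one piece of such an ensemble, the Python below trains a random forest on made-up visitor and offer features and uses its predicted response probabilities to rank a handful of offers. The features, offers, and labels are all hypothetical; a real system combines several models and reweights them as behaviour shifts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row per (visitor, offer) pair.
# Features: [age, past_purchases, offer_id]; label: did the visitor respond?
X = np.array([
    [34, 2, 0], [34, 2, 1], [52, 9, 0], [52, 9, 2],
    [23, 0, 1], [23, 0, 2], [41, 5, 0], [41, 5, 1],
])
y = np.array([1, 0, 0, 1, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def rank_offers(profile, offer_ids):
    """Score each offer for one visitor and return them best-first."""
    candidates = np.array([profile + [offer_id] for offer_id in offer_ids])
    response_prob = model.predict_proba(candidates)[:, 1]
    return sorted(zip(offer_ids, response_prob), key=lambda pair: -pair[1])

# Rank three offers for a 30-year-old visitor with 3 past purchases.
print(rank_offers([30, 3], offer_ids=[0, 1, 2]))
```

Retraining the model on fresh interaction data is the (greatly simplified) analogue of the automatic rebuild described above.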

All three of the tactics described above can be highly useful, but the real impact only appears when you use them all in synchrony.

Auto-target on an experience basis

As all these examples make clear, effective personalisation isn’t so much about delivering the right assets as about delivering the right experiences. Your automation solution needs to be able to A/B test not just pages or products, but holistic experiences involving variations in content, navigation, layout, timing, and a host of other interrelated attributes.
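As a rough sketch (the attribute names are invented), those holistic variants can be enumerated as combinations of experience attributes and then handed, one per arm, to the same kind of bandit-style allocator sketched earlier, so the test optimises whole experiences rather than isolated assets.

```python
from itertools import product

# Hypothetical experience attributes; a holistic "experience" is one
# combination of all of them, not a single page element.
content_blocks = ["hero_video", "hero_static"]
navigation = ["mega_menu", "slim_menu"]
layouts = ["two_column", "single_column"]

experiences = [
    {"content": c, "navigation": n, "layout": l}
    for c, n, l in product(content_blocks, navigation, layouts)
]
# 2 x 2 x 2 = 8 holistic experiences, each of which can be treated as one
# arm of the traffic allocator shown in the first section.
print(len(experiences), experiences[0])
```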

We at Adobe recently went through exactly this process with our corporate homepage. Not only did we A/B test variations in content blocks; our software also automatically mixed up the entire layout of the page in a wide variety of ways and A/B tested each variant using the multi-armed bandit approach. The result wasn’t just one clear winning layout, but a range of winners, each optimised for a particular audience segment. That improvement has already driven a major lift in revenue.

In short, the key is to use automated personalisation to speak to customers in the ways they want to be spoken to. But tactics like these are just the beginning of a full-powered customer experience strategy. The coming articles in this series will explore a diverse array of other ways you can use data to surprise and delight your customers with great experiences at every turn. If you’re interested in learning more about these personalisation capabilities, then please take a look at Adobe Target, the personalisation engine of the Adobe Experience Cloud, which is where we’re bringing these exciting capabilities to market.