Optimising FSI Customer Experiences, Part 5: Testing, Learning, and Improving
Digital transformation in the financial services industry (FSI) is not an “either/or” proposition. It’s an ongoing journey of continual improvement. You might get the data, content, machine learning, and delivery components exactly right, but if you don’t keep learning and improving, you’ll stagnate, and your competitors will soon overtake you.
This drive toward continual improvement can only flourish in a culture of ongoing learning and growth. And many top performers are starting to get that message. In a recent survey of FSI executives, 74 percent of industry-leading marketers reported that they focus on continually improving their multichannel personalisation, whereas only 31 percent of the mainstream do. In other words, leaders are personalising a lot more than the laggards are.
Even compared to last year, there’s been a significant increase in the amount of testing and personalisation that FSI companies are doing. From 2016 to 2017, the share of FSI firms doing no personalisation at all dropped from 12 percent to just 1 percent. It’s become an inescapable fact that continual refinement of personalisation tactics is the only way to keep winning new customers and retaining the ones you already have.
How do we go about creating a workflow for continual improvement in personalisation? It all starts when you look at your customer data, and find out where the biggest problems lie.
Structuring and prioritising
Contrary to what you might expect, you don’t need to run any specialised tests to find where you need to improve. Start by looking at your existing customer journey stats, and finding the points where the biggest drop-offs occur. At the same time, walk through the entire journey yourself, and perform your own qualitative assessment, keeping an eye out for any unnecessary friction. Back those conclusions up with hard data, and you’ve got a solid basis for improvement.
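To make that concrete, here’s a minimal sketch of how you might quantify those drop-offs from existing journey stats. The funnel stages and visitor counts below are purely illustrative, not real benchmarks.

```python
# A minimal sketch of drop-off analysis over a hypothetical onboarding funnel.
# Stage names and visitor counts are illustrative, not real benchmarks.
funnel = [
    ("Landing page", 50_000),
    ("Product details", 31_000),
    ("Application started", 9_500),
    ("Identity verified", 6_200),
    ("Application submitted", 5_400),
]

# Compare each stage with the next to find where the biggest drop-offs occur.
for (stage, visitors), (next_stage, remaining) in zip(funnel, funnel[1:]):
    drop_off = 1 - remaining / visitors
    print(f"{stage} -> {next_stage}: {drop_off:.0%} drop-off")
```

The steepest drop-off is usually the best place to form your first hypothesis.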
Once you’ve got a feel for this sort of analysis, it’s time to set up a more formal framework for testing and improving. Develop hypotheses about changes that might streamline the journey and raise conversions, then design and execute those tests on segments of your audience. Although it’s important to prioritise your tests around the biggest friction points, don’t get too focused on a rigid testing schedule. Instead, work on creating an iterative testing methodology that enables you to redesign parts of the journey quickly.
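As a rough illustration of what evaluating one such hypothesis might look like, here’s a simple two-proportion z-test in Python. The conversion figures are invented for the example, and most testing platforms will run this kind of check for you.

```python
import math

# Illustrative check of a single hypothesis, e.g. "a simplified application
# form lifts conversions". All figures below are made up for the example.
control_conversions, control_visitors = 540, 12_000
variant_conversions, variant_visitors = 628, 12_000

p_control = control_conversions / control_visitors
p_variant = variant_conversions / variant_visitors
p_pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)

# Two-proportion z-test: |z| above ~1.96 suggests significance at the 95% level.
standard_error = math.sqrt(p_pooled * (1 - p_pooled) * (1 / control_visitors + 1 / variant_visitors))
z_score = (p_variant - p_control) / standard_error

print(f"Control {p_control:.2%} vs variant {p_variant:.2%}, z = {z_score:.2f}")
```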
As your learning grows, new ideas for tests will emerge naturally. And the more tests you run, the more important it becomes to rapidly consolidate their results into actionable insights.
Moving and adapting
Rapid reporting and adaptation are crucial when you’re running many tests in parallel, because the fact is, most of your tests are going to fail. That’s why you run each test only on a small segment of your audience, using your marketing platform’s “multi-armed bandit” approach to automatically allocate more traffic to statistically successful experiences, in real time.
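Commercial platforms handle this allocation for you, but to give a feel for the idea, here’s a small Thompson-sampling sketch of a multi-armed bandit. The experiences and their “true” conversion rates are hypothetical.

```python
import random

# A toy Thompson-sampling bandit: traffic drifts toward whichever experience
# is performing best. The experiences and conversion rates are hypothetical.
true_rates = {"control": 0.040, "variant_a": 0.052, "variant_b": 0.035}
successes = {name: 1 for name in true_rates}  # Beta prior: alpha = 1
failures = {name: 1 for name in true_rates}   # Beta prior: beta = 1

for _ in range(20_000):  # each iteration is one visitor
    # Sample a plausible rate for each experience and serve the best draw.
    draws = {name: random.betavariate(successes[name], failures[name]) for name in true_rates}
    chosen = max(draws, key=draws.get)
    if random.random() < true_rates[chosen]:
        successes[chosen] += 1
    else:
        failures[chosen] += 1

for name in true_rates:
    served = successes[name] + failures[name] - 2
    observed = (successes[name] - 1) / served if served else 0.0
    print(f"{name}: served {served:>6} visitors, observed rate {observed:.2%}")
```

Run it a few times and you’ll see most of the traffic end up on the strongest experience, while the weaker ones still get enough visitors to keep the estimates honest.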
At Adobe.com, for example, we run hundreds of tests per month, each on a small segment of our audience. About 80 percent of those tests “fail”—i.e., they produce different results than our hypothesis predicted. But that kind of failure doesn’t mean we’ve wasted our time and resources. On the contrary, it means we’ve gained a valuable data point. And because we fail fast, we can quickly reallocate traffic to the winning experience, and continue to optimise.
We’re also using data to understand that different audiences want different experiences. The classic example is the case of the red button versus the blue one. Overall, more people may prefer the blue button, but some still prefer the red one. This doesn’t mean we show everyone the blue button. It means we use data to build “blue button” and “red button” audience segments, and serve the preferred button colour to each segment.
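In code terms, the serving logic can be as simple as a lookup from segment to preferred variant. The segment names here are hypothetical; real targeting would come from your marketing platform’s audience definitions.

```python
# A toy per-segment serving rule; segment names are hypothetical.
preferred_variant = {"blue_button": "blue", "red_button": "red"}

def choose_button(segment: str, overall_winner: str = "blue") -> str:
    """Serve each audience segment the colour it responds to best."""
    return preferred_variant.get(segment, overall_winner)

print(choose_button("red_button"))  # -> red
print(choose_button("unknown"))     # -> blue (fall back to the overall winner)
```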
Upshots of optimisation
Like all elements of digital transformation, the learning and improvement process looks different for every organisation. But companies that get it right are seeing stunning growth in conversion rates and revenue, and they continue to test and optimise across all channels.
For example, National Australia Bank recently achieved a 45 percent lift in conversion by analysing performance and optimising the design of its credit card pages, resulting in 3,000 more applications per year. Along similar lines, SunTrust increased its click-through rate by 47 percent simply by A/B testing and optimising its messaging. UniCredit, meanwhile, increased online acquisition by 60 percent and reduced its cost per lead by 43 percent by using A/B testing within a continual optimisation framework.
When Sir Clive Woodward, England’s former rugby manager, was asked how he built the team that won the 2003 World Cup, he replied, “_It was not about doing one thing 100 percent better, but about doing 100 things one percent better._” In other words, when you’re running these test-and-learn programs, it’s not always about the big wins. It’s about the small incremental changes that you make to continuously drive better performance.
All the components we’ve spoken about in this blog series—data, content, artificial intelligence, and cross-channel delivery—are fundamentals. Optimisation is about bringing all these pieces together. And the final piece, testing and improving, is about making sure you stay ahead. This is an ongoing journey we all have to take, making sure our content is as on-brand and connected as possible, by constantly testing and optimising the experience. Thanks for joining me.