Taking the Mystery Out of the Magic of AI-Driven Personalization
Part 1: Feeding the algorithm good data for prediction.
The more we talk about using artificial intelligence (AI) and machine learning in personalization, the more we realize that it’s not this exotic marketing magic after all. In the end, it’s just applying computer science and math to your marketing challenges.
In a recent Adobe Summit session, “Adobe Sensei in Adobe Target: Automating A/B Testing and Personalization,” my colleague, Adobe data scientist Nikaash Puri, and I set out to deepen attendees’ understanding of AI and machine learning. In this post, I’ll discuss one aspect of that session: how to augment the data set available for AI-driven personalization, and how Adobe Target gets that data ready to feed its machine-learning personalization models, such as the ones built by the Adobe Sensei-driven capabilities of Adobe Target, Auto-Target and Automated Personalization. In a later post, Nikaash will go into more detail on how Adobe Target uses AI in activities that use those same capabilities.
Why today’s AI needs data
In the not-so-distant past, attempts at AI involved trying to write code that would capture expertise — perhaps of a doctor, a lawyer, or some other expert. You can pretty easily see that this “expert system” approach is limited by the knowledge coded into it.
Today’s approach to AI, including the AI used in Adobe Target’s personalization, relies more on machine learning. Machine learning does not require hand-coded human expertise to become intelligent; it automatically learns relationships in data without anyone directly encoding them. It uses a set of algorithms to build models of those relationships, which it can then apply to incoming data and new visitors. Some of the most frequently used machine-learning methods include deep learning (a form of neural network) and random forests.
For a machine-learning algorithm to automatically learn relationships in data, it needs to examine a set of training data, identify the patterns in it, and build a trained model. But just feeding your algorithm any old data set won’t necessarily give you the results you want. That data needs to be relevant to predicting or classifying whatever metric you’re optimizing for, like your conversion rate or revenue.
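To make that concrete, here’s a minimal sketch of the idea in Python, using scikit-learn’s random forest on made-up visitor data. Everything in it, from the features to the labels to the library choice, is an assumption for illustration; it is not Adobe Target’s implementation.

```python
# A minimal sketch of supervised machine learning: a random forest
# learns the relationship between visitor features and a conversion
# outcome from training data. Purely illustrative; this is not Adobe
# Target's actual model or code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical training data: each row is a visitor, each column a
# feature (say, visit count, time on site, referrer category).
X = rng.random((1000, 3))
# Hypothetical labels: 1 = converted, 0 = did not convert.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 1000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# The algorithm examines the training data and builds a trained model.
model = RandomForestClassifier(n_estimators=100)
model.fit(X_train, y_train)

# The trained model can now score new, unseen visitors.
print("Predicted conversion probability for one held-out visitor:",
      model.predict_proba(X_test[:1])[0, 1])
```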
How Adobe Target machine learning prepares data to build its personalization models
Here’s how Adobe Target uses the data it collects automatically, along with any additional data you provide it, to build personalization models when you are running an Auto-Target or Automated Personalization activity.
First, Adobe Target builds a visitor profile for each visitor. This profile captures environmental, site-behavior, offline, temporal, and referrer variables, along with any Adobe Experience Cloud audiences each visitor belongs to.
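As a rough illustration, a visitor profile covering those categories might look something like the following. The field names are hypothetical, invented for this sketch; they are not Adobe Target’s actual profile schema.

```python
# A hypothetical illustration of the kinds of variables a visitor
# profile might capture. All field names are invented for this sketch;
# they are not Adobe Target's actual profile schema.
visitor_profile = {
    # Environmental variables
    "browser": "Chrome",
    "device_type": "mobile",
    # Site-behavior variables
    "pages_viewed": 7,
    "cart_additions": 1,
    # Offline variables
    "days_since_last_store_purchase": 12,
    # Temporal variables
    "hour_of_day": 20,
    "day_of_week": "Saturday",
    # Referrer variables
    "referrer_domain": "google.com",
    # Adobe Experience Cloud audience memberships
    "audiences": ["loyalty-members", "recent-browsers"],
}
```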
You can also augment these profiles with other data. In fact, you probably have a lot of data about your visitors beyond the information automatically collected through Adobe Target — data from your CRM and your call center, data from other Adobe Experience Cloud solutions like Adobe Analytics and Adobe Audience Manager, and even data shared from business partners or purchased from third-party sources.
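Continuing the hypothetical sketch above, augmenting that profile with first-party attributes can be thought of as merging in another set of key-value pairs, here a made-up CRM record:

```python
# Continuing the hypothetical sketch: augmenting the automatically
# collected profile with first-party data, such as a CRM export.
# These attribute names are also invented for illustration.
crm_attributes = {
    "loyalty_tier": "gold",
    "lifetime_value": 1843.50,
    "open_support_tickets": 0,
}

# Merge the CRM attributes into the visitor profile.
visitor_profile.update(crm_attributes)
```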
The more high-quality data you have access to in Adobe Target to build your personalization models, the better those models will be at predicting your visitors’ behavior, and the higher the likelihood you’ll see lift in your activity. One of our product managers, Ram Parthasarathy, recently wrote a detailed but easy-to-understand post on how to bring data into Adobe Target that’s well worth reading. It will show you that it’s really not so difficult to bring in additional data to augment your visitor profiles in Adobe Target.
Each time a visitor enters one of your Auto-Target or Automated Personalization activities, Adobe Target takes a snapshot of that visitor’s complete profile and session information. Through “feature engineering,” it transforms that data into a form the algorithm can easily ingest; as part of this step, it might also remove outlier values so that they don’t unduly influence your model, and it might note missing values in your data. Adobe Target then uses “feature selection” to choose the features it will use to build the model, removing redundant variables or variables that add little value to predicting your optimization goal.
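As a loose illustration of what feature engineering and feature selection can involve, here’s a hypothetical sketch using pandas and scikit-learn; this is not Adobe Target’s actual pipeline, and the data is invented.

```python
# A hypothetical sketch of feature engineering and feature selection
# with pandas and scikit-learn; not Adobe Target's actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Made-up snapshots of visitor profiles, plus the conversion outcome.
df = pd.DataFrame({
    "device_type": ["mobile", "desktop", "mobile", "tablet"],
    "pages_viewed": [7, 3, 120, 5],                 # 120 looks like an outlier
    "lifetime_value": [1843.5, None, 95.0, 410.0],  # one missing value
    "converted": [1, 0, 1, 0],
})

# Feature engineering: encode categorical variables numerically,
# cap outliers, and handle missing values.
features = pd.get_dummies(df.drop(columns="converted"))
cap = features["pages_viewed"].quantile(0.95)
features["pages_viewed"] = features["pages_viewed"].clip(upper=cap)
features["lifetime_value"] = features["lifetime_value"].fillna(
    features["lifetime_value"].median()
)

# Feature selection: keep only the features that contribute most to
# predicting the optimization goal (conversion, in this sketch).
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=50, random_state=0)
)
selector.fit(features, df["converted"])
print("Selected features:", list(features.columns[selector.get_support()]))
```

SelectFromModel here prunes on a random forest’s feature importances; it’s just one of many reasonable selection strategies for a sketch like this.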
That’s it. Adobe Target has transformed your data set, and it’s now ready to train and build your personalization model — the topic of Nikaash’s upcoming post.