How natural language generation transforms the customer experience

Personalized, value-delivering customer experiences are the new battleground for customer acquisition and retention. That’s a major reason why business leaders across all company functions are increasingly turning to artificial intelligence (AI) to deliver such experiences at scale. At scale is the key concept here: How do brands create the right experience for so many individual customers?

For many companies, online customer support is already powered by AI. And this support channel isn’t just a help desk – it’s a customer experience, one that requires thoughtful interactions and measured language. Yet agents can’t be expected to craft, much less deliver, thousands of engaging and effective script variations when working with customers. This is where AI can work its magic, delivering high-impact conversations that help millions of customers solve their problems through digital self-service channels. The result is better customer experiences and lower contact center costs.

This kind of deep content personalization at scale is a powerful differentiator, made possible by natural language generation (NLG) technology and a new model of NLG known as GPT-3.

What’s the big deal about natural language generation?

Let’s start with a quick tech overview. NLG is the business end of a family of related AI technologies that fall under the umbrella of natural language processing (NLP). NLP converts human language into structured data that a computer can interpret. Natural language understanding (NLU) interprets that language to identify what the customer needs, handling challenges such as slang, mispronunciation, and syntax. NLG is the AI technology that produces verbal or written text that looks and sounds like a human wrote it.
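
To make the split concrete, here is a minimal sketch of an NLU step followed by an NLG step, using the open-source Hugging Face transformers library purely as an illustration – the models and prompts below are stand-ins, not the systems Persado runs in production:

    from transformers import pipeline

    # NLU: interpret what the customer needs, e.g. classify the intent of a request.
    nlu = pipeline("zero-shot-classification")
    request = "My new phone won't connect to wifi"
    intent = nlu(request, candidate_labels=["billing", "technical support", "upgrade offer"])
    print(intent["labels"][0])  # most likely intent, e.g. "technical support"

    # NLG: produce a natural-sounding reply conditioned on the conversation so far.
    nlg = pipeline("text-generation", model="gpt2")
    prompt = "Customer: My new phone won't connect to wifi.\nAgent:"
    reply = nlg(prompt, max_length=40, num_return_sequences=1)
    print(reply[0]["generated_text"])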

NLG is the last mile of engaging a human after a lot of language processing and computation. Frank Chen, my colleague and Persado’s NLP scientist, summarizes it well: “NLG generates natural, context-appropriate, and helpful responses to a customer question or request. NLG makes personalized, digital engagement possible at scale, which is a critical part of the value it delivers.”

For example, earlier this year Vodafone and Persado put NLG technology to work when the telecom leader needed to inform customers about a smartphone upgrade offer while staying sensitive, against the backdrop of the pandemic, about the messaging it used. We used NLG to generate different, context-appropriate versions of the message, sent them to sample audiences, and gauged their effectiveness. The winning message ultimately drove 83% more leads for Vodafone.
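
As a rough illustration of how a result like “83% more leads” is measured in this kind of test, here is a tiny lift calculation with entirely made-up numbers (the real audience sizes and counts are not disclosed here):

    # Hypothetical figures chosen only to illustrate the arithmetic of "lift".
    control_leads, control_sends = 1_000, 100_000   # baseline message
    variant_leads, variant_sends = 1_830, 100_000   # NLG-generated winner

    control_rate = control_leads / control_sends
    variant_rate = variant_leads / variant_sends
    lift = (variant_rate - control_rate) / control_rate
    print(f"Lift: {lift:.0%}")   # -> Lift: 83%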

More training data generally means better results

The bigger and more fine-tuned a data set AI can be trained on, the more value it can deliver both to customers and to businesses. That’s where GPT-3 comes in. GPT-3, released by OpenAI in June, is the third generation of the lab’s “autoregressive language models,” a type of NLG model trained to model language by predicting the next word from the words that came before it. These models get better when they have more data to train on. Think about the difference between learning a language in a classroom, with basic nouns and verbs on a whiteboard, and actually living in a foreign country, where you are exposed to the full breadth of nuances, idiosyncrasies, and, in the case of English, confusing idioms and expressions, and you get the picture.
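
The training objective itself is conceptually simple. Here is a toy next-word predictor that just counts which word follows which in a tiny made-up support corpus; models like GPT-3 replace the counting with a neural network holding billions of parameters, but the “predict the next word from previous context” task is the same:

    from collections import Counter, defaultdict

    # A tiny, made-up "training corpus" of customer-support phrases.
    corpus = (
        "your order has shipped . your order has arrived . "
        "your refund has been processed ."
    ).split()

    # Count which word follows each word in the training text.
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the continuation seen most often after `word` in training."""
        candidates = next_word_counts.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("order"))   # -> "has"
    print(predict_next("refund"))  # -> "has"

The richer the text such a model sees during training, the better those guesses become – which is exactly why the jump in scale behind GPT-3 matters.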

GPT-3 works with more than 100 times as many parameters as the previous incarnation, GPT-2 (175 billion vs. 1.5 billion). That’s a major step-change in model capacity and is the difference between reciting a few sentences from flashcards and providing commentary on eighteenth-century poetry.

The performance of these models on various NLP tasks grows with the scale of the model, which is one of the reasons the AI community is abuzz over the release of GPT-3. The Guardian recently asked GPT-3 to write an article based on a few specific guidelines, and the results are eye-opening – an 1,100-word, clearly “written” essay by “a thinking robot” that references Stephen Hawking and Mahatma Gandhi, all in an attempt to assure readers that robots come in peace.

“Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.”

GPT-3

However, despite these early use cases and examples, GPT-3 shows some weakness in text synthesis – its longer passages can start to repeat themselves or lose coherence – which matters for NLG tasks. And even though GPT-3 comes from an organization called OpenAI, the model is not openly available at the moment. Yet the sheer volume of data available for analysis and training, coupled with these early examples, makes it a technology worth noting.

NLG models trained on such staggering amounts of data will only get better and more finely tuned at solving very real, very large business problems that impact the bottom line.

And the innovation will continue to accelerate. Remember when the first virtual assistants hit the market? They were early versions that were nowhere near the sophistication of today’s voice-activated assistants.

“Hey, Siri! What’s GPT-3?”