Creating a Dashboard that Showcases Your Optimization Program’s Value


by Nicolas Wu

posted on 01-08-2019

To get financial support and buy-in, almost any program must demonstrate its value to the business. This goes for your testing and personalization program too. Quantifying what your program has achieved over time and communicating that to executives and stakeholders is essential. But how do you do that?

Adobe Target consultant David Graves recently delivered a webinar on this topic in a Personalization Thursdays session, and gave great advice on an approach you can take.

An expectation of accountability and future vision

Quantifying your program’s achievements shows you what the program has delivered and which efforts add the most or least value. Those answers can help point to how the program can grow and drive additional value. It can even help you develop a detailed 12-month activity road map, including goals to achieve and activities to run. By looking at year-to-year achievements, you should also be able to develop a vision for driving more business value with the program beyond those 12 months.

Yes, you have to capture and document program data

Documenting your program results is like investing in home maintenance projects such as getting a new roof and cleaning the gutters. They aren’t sexy like shiny new appliances and gleaming kitchen countertops, but they give huge peace of mind when that giant rainstorm hits. That’s the kind of investment you’re making when you document your personalization and optimization program efforts. You’ll be glad you did it when you see those efforts displayed in an easy-to-interpret summary dashboard.

So what kind of data about each activity do you need to capture and document? Every program differs, but based on experience, David recommends the following:

At a minimum, document:

Consider additionally documenting:

If you need to review by business unit or program, document:

Document your data consistently, so that you can build on it, compare it over time, and view it from different perspectives. For example, if you use “S” to represent a small-sized activity, where small is defined as “takes less than 7 hours to implement,” use that taxonomy with all activities. The table below gives an example of how you might capture your data.
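To make the taxonomy concrete, here is a minimal sketch of what a consistently structured activity record might look like. The field names and size codes are illustrative assumptions, not a format prescribed by Adobe; adapt them to whatever your program actually tracks.

```python
# Hypothetical activity record using one consistent taxonomy across all
# activities, so records can be compared and rolled up over time.
from dataclasses import dataclass

@dataclass
class ActivityRecord:
    name: str
    launch_date: str       # ISO date, e.g. "2018-03-15"
    size: str              # "S" = takes less than 7 hours to implement; "M"; "L"
    device_type: str       # e.g. "desktop", "mobile", "tablet"
    kpi_focus: str         # e.g. "AOV", "conversion", "RPV"
    result: str            # "positive", "neutral", or "negative"
    revenue_impact: float  # estimated incremental revenue
    next_step: str         # e.g. "implemented", "spurred new activity", "none"

# Example row, equivalent to one line in the data-capture spreadsheet.
record = ActivityRecord(
    name="Homepage hero test",
    launch_date="2018-03-15",
    size="S",
    device_type="mobile",
    kpi_focus="conversion",
    result="positive",
    revenue_impact=12500.0,
    next_step="implemented",
)
```

Because every record uses the same fields and the same codes, the records can later be filtered, compared year over year, and fed straight into a dashboard.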

If you manage your activities in a tool, you may be able to import this data directly into the tool’s dashboard. Many such tools exist, including Illuminate from Brooks Bell, MiaProva, Basecamp, Podio, Teamwork, Confluence, Jira, and Trello. A recent Personalization Thursdays webinar with partner Brooks Bell on five governance best practices for an optimization program showed how Illuminate can help you do that.

You’ve captured the data. Now what?

Once you’ve captured activity data, bring it into a dashboard. Many of the above tools offer dashboards, but you can create your own like the one shown in the webinar. We’ll use it with the data of a hypothetical company to show the value it can deliver. Feel free to use the spreadsheet and dashboard with your own data.

To show program progress over time, a dashboard should let you compare the number of activities launched, revenue generated, and percent of activities with positive results from year to year. Here you see the company’s results from 2017 to 2018. Even with a partial year, the company has earned more annual revenue and had more activities with positive results in the first half of 2018. The lower bar chart shows that the company is launching significantly more activities each month in 2018. In 2017, the low percentage of activities with positive results would have pointed the company toward focusing on activity quality in 2018.
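The year-over-year rollup described above is a simple aggregation over the activity records. The sketch below computes activities launched, revenue generated, and percent positive per year; the sample records are invented purely for illustration.

```python
# Minimal sketch of the dashboard's year-over-year summary:
# count launched, sum revenue, and compute percent positive per year.
from collections import defaultdict

# Invented sample records; in practice these come from the
# data-capture spreadsheet or project-management tool.
records = [
    {"year": 2017, "result": "positive", "revenue": 10000},
    {"year": 2017, "result": "neutral",  "revenue": 0},
    {"year": 2017, "result": "negative", "revenue": 0},
    {"year": 2018, "result": "positive", "revenue": 25000},
    {"year": 2018, "result": "positive", "revenue": 15000},
    {"year": 2018, "result": "neutral",  "revenue": 0},
]

summary = defaultdict(lambda: {"launched": 0, "revenue": 0, "positive": 0})
for r in records:
    s = summary[r["year"]]
    s["launched"] += 1
    s["revenue"] += r["revenue"]
    s["positive"] += r["result"] == "positive"

for year, s in sorted(summary.items()):
    pct_positive = 100 * s["positive"] / s["launched"]
    print(f'{year}: {s["launched"]} launched, '
          f'${s["revenue"]:,} revenue, {pct_positive:.0f}% positive')
```

The same per-year totals feed the bar charts directly, so a spreadsheet pivot table works just as well as code here.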

A dashboard can show not only the percentage of activities with positive results, but also of neutral and negative results. The Activity Performance chart above reveals a large percentage of neutral results, an indicator that activity experiences are not dramatically different enough to make either a positive or negative impact. A negative result is also valuable — it shows you what doesn’t work. This company might want to continue focusing on activity quality to further reduce the percentage of neutral results in 2019.

The Activity Next Steps chart shows actions taken on activities — what percent were implemented, influenced making a change or some other action, spurred another activity, or resulted in no action. Although a quarter of activities showed positive results in 2017, only about half of these were implemented. Were activities being done in places where even with positive results the program can’t get approval to make changes? Does the program need a better process for ensuring changes get implemented? The dashboard reveals a need for further investigation.

The Device Type chart above shows the percentage of activities run for specific devices. Here, the number of mobile activities appears surprisingly low for 2017, given that the majority of website traffic came from mobile devices starting that year. This company seems to have recognized that, though, ramping up more mobile activities in 2018.

Looking at the Activity Size chart above, this company ran many more “large” activities in 2017 compared to “small” ones. Given that the number of positive results was relatively low in 2017, they might consider whether activities were too complex, examining too many elements to get clear results. It seems they realized this — in 2018, they ran many more “small” activities, and many fewer “large” ones.

Finally, the Activity KPI Focus chart can be built from activity results data. This chart reveals if the company is focusing its activities where it intended. For example, if the business goal is to increase AOV, conversion, and RPV, it appears they weren’t doing that in 2017 — they focused on engagement, enrollments, and sign-ins. This showed them the need to shift how and where they run activities to impact their important KPIs, which they did in 2018.

Dashboard differences for testing versus personalization

Collecting data and building dashboards for testing is straightforward, but doing that for personalization activities can be challenging because those activities can vary widely. For example, a company may decide to personalize messaging around offers, with some offers including free shipping. To quantify the value, they may need to back out the cost of free shipping included in particular sales. That’s not so difficult to track.
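Backing out the cost of free shipping is simple arithmetic: subtract the shipping cost the offer absorbed from the incremental revenue it generated. The sketch below shows this calculation; the function name and all the numbers are hypothetical.

```python
# Hypothetical calculation: net value of a free-shipping personalization
# offer after backing out the shipping cost absorbed on those orders.
def net_offer_value(incremental_revenue, free_ship_orders, cost_per_shipment):
    """Incremental revenue minus the shipping cost the offer absorbed."""
    return incremental_revenue - free_ship_orders * cost_per_shipment

# Example with invented figures: $50,000 incremental revenue,
# 1,200 orders that used free shipping, $8.50 average cost per shipment.
value = net_offer_value(incremental_revenue=50000,
                        free_ship_orders=1200,
                        cost_per_shipment=8.50)
print(f"Net offer value: ${value:,.2f}")  # 50000 - 1200 * 8.50 = 39800.00
```

Recording the net figure, rather than gross revenue, keeps the personalization line items in the dashboard comparable with test results that have no embedded offer costs.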

But what about a company with multiple teams that put different offers on the home page at different times according to a schedule? Offers come and go over time, but especially if it’s a long-term personalization activity that’s doing well, they’d definitely want to track and share the value it delivers. They could do this by adding a line item to the data collection spreadsheet labeled “Personalization Offer XYZ version 1.” When the activity is later updated, it gets named “Personalization Offer XYZ version 2” so the company can compare version 1 to 2.

Pulling it all together with a summary and roadmap

Once you’ve built your dashboard and populated it with data for a given time period, you can analyze and summarize your findings. The screenshot below shows the analysis by the company discussed in the dashboard examples above, based on its 2017 dashboard data.

The company could use this analysis to inform its activities for 2018 and develop main points of their 12-month road map (shown below):

Ready to build your dashboard?

Listen to the David Graves webinar to learn more. Then determine what data to collect, start collecting it, and use it in the dashboard we’ve shared or a project management tool. You’ll quickly see the value your program delivers to the business, learn areas for improving your program, and get quantitative feedback about your program’s growth and maturity. As importantly, you can easily share the business value your program drives with executives and business stakeholders.

Topics: Digital Transformation, Personalization

Products: Target