Understanding Video Measurement Implementation
This post is part of an ongoing series on video measurement tips, tricks, and best practices.
Though it is often viewed as a roadblock, video measurement does not need to be a barrier to success. The most important concept in video implementation is that video measurement is all about the video player. The player determines how you will implement, what you can implement, and even where the code will be placed. In this post I’m going to provide a general overview that should work for most video players. In future posts I will cover instructions for implementing video measurement for specific players.
Do you know which video player is on your site? An easy place to start is to open a web browser and play a video. While the video is playing, right-click within the player. You should see a menu of options that can tell you a lot about your player technology. If you see an option like “About Adobe Flash Player”, the player is using Flash technology to display the video. If you don’t see anything about Flash, there is a good chance your player is using HTML5 technology. The menu may also include a video vendor name such as Brightcove, thePlatform, or JW Player. Note down your findings so you can refer to them later.
If you are just getting started with video on your site and have not settled on a video player vendor, remember to find out what analytics integrations are supported and what data points are available when evaluating vendors. Most modern video players integrate easily with the Adobe Analytics tools.
Throughout this post I will be using a sample player that can be found here: http://marijka.host.adobe.com/video/html5-blog.html. This very basic HTML5 video player will play videos on most devices with a web browser.
The Video Player
Before I dive into a code discussion, let’s take a step back. Why does the video player matter so much? What about the videos themselves, or the device, or the app? Aren’t all the pieces important?
Indeed, all the pieces matter, and the diagram to the left shows the technology layers involved in video display.
Device: The physical machine that runs computer programs, such as a desktop PC, mobile phone, tablet, game console, or set-top TV box.
Site or Application: The host program, which may be a web browser or a stand-alone application such as a mobile app.
Video Player: Code that displays video assets and provides a user interface for video playback. The code is designed to work with specific applications and devices, making it “aware” of the environment in which it is placed.
Video Asset: Individual video files, usually stored in a video asset management system. Video assets themselves do not contain any metadata; they are simply raw files. Metadata about video assets is stored and associated through asset management systems.
Everything we want to measure revolves around user interaction with the video, and the video player controls how users view it. For the video player code to work, it must interact with every layer of technology in the diagram. By tapping into what the player already knows, we gain access to the data we need about where and what was viewed. The player also controls playback, so we can capture exactly when a video plays. The video player is the brain of the operation, which is why the measurement code needs to work with it.
There are four technical components to implementing video analytics: the measurement plan, the measurement library, the player mapping, and the variable mapping.
The measurement plan was covered in the previous post. If you haven’t designed one yet, it is the first step in video implementation because it lays out the key data points you want to collect for video. For the sample video player, I’m using the measurement plan from the previous post plus a few additional data points, and for simplicity we will examine only the long-form player. Let’s take a quick look at the table below.
The blue and green column values are mapped to specific variables using the variable mapping code. The six green columns (video name, page name, athlete name, domain, player name, and length) are set in the player mapping code. Data for these types of variables often comes from metadata found in the video asset management system or in contextual data from the app, site or device. The blue columns are variables automatically set when you choose to track and segment by milestones, which is the recommended method for long-form videos. These variables are controlled by the measurement library.
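As a sketch, milestone tracking is switched on in the measurement library configuration inside s_code.js. The specific milestone percentages, eVars, and events below are assumptions for illustration; substitute the variables reserved in your own measurement plan:

```javascript
// Illustrative Media Module configuration in s_code.js. The milestone
// percentages and the specific eVars/events are placeholders, not
// required values.
s.Media.trackWhilePlaying = true;             // send data during playback, not only at close
s.Media.trackMilestones = "25,50,75";         // fire a call at each percentage milestone
s.Media.segmentByMilestones = true;           // derive segment names from the same milestones
s.Media.trackVars = "eVar2,eVar3,events";     // variables the library is allowed to set
s.Media.trackEvents = "event1,event2,event3"; // events the library is allowed to set
```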
The measurement library is the code, such as the Media Module included in Adobe’s s_code.js, that formats and sends the video tracking calls. Once the measurement library is in place you can map your video player actions to the library functions; this is the player mapping. The measurement library is looking for some key points within video playback, including open, start, stop, close, scrub start, scrub stop, pause start, pause stop, video name, player name, and video offset. Your video player may use different names for these points than the measurement library does, which is why the player mapping is needed. For instance, in the sample video player, I have mapped the HTML5 video points to the video functions expected by the Media Module library. You can see that video complete is called “ended” in HTML5 and is called s.Media.close in the Media Module library. To see this code, view the source of the HTML page.
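The mapping can be sketched as a small piece of glue code, assuming the legacy Media Module API (s.Media.open/play/stop/close). The fake video element and the recording stub below exist only to make the sketch self-contained; on a real page you would attach the same listeners to the actual HTML5 video element and let s_code.js supply s.Media. The name wirePlayer is my own, not part of any library.

```javascript
// Stand-in for an HTML5 <video> element's event interface.
function makeFakeVideo(duration) {
  const handlers = {};
  return {
    currentTime: 0,
    duration: duration,
    addEventListener(name, fn) { handlers[name] = fn; },
    fire(name) { if (handlers[name]) handlers[name](); }
  };
}

// Stand-in for the Media Module that records calls for inspection.
const calls = [];
const s = {
  Media: {
    open:  (name, length, player) => calls.push('open'),
    play:  (name, offset)         => calls.push('play'),
    stop:  (name, offset)         => calls.push('stop'),
    close: (name)                 => calls.push('close')
  }
};

// The player mapping itself: translate the player's event names into
// the function names the measurement library expects.
function wirePlayer(video, s, videoName, playerName) {
  video.addEventListener('play', () => {
    if (video.currentTime === 0) {
      s.Media.open(videoName, video.duration, playerName);
    }
    s.Media.play(videoName, video.currentTime);
  });
  video.addEventListener('pause', () =>
    s.Media.stop(videoName, video.currentTime));
  // HTML5 calls completion "ended"; the Media Module calls it close.
  video.addEventListener('ended', () => {
    s.Media.stop(videoName, video.duration);
    s.Media.close(videoName);
  });
}

const video = makeFakeVideo(120);
wirePlayer(video, s, 'Sample Video', 'HTML5 Sample Player');
video.fire('play');           // viewer presses play at offset 0
video.fire('ended');          // video finishes
console.log(calls.join(',')); // open,play,stop,close
```

The stub makes the translation easy to see: one player event may map to more than one library call, as with “ended” triggering both stop and close.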
After you’ve mapped the player names to the measurement names, you then need to map the measurement names to specific Adobe Analytics variables. This happens in the variable mapping code. There are three options for variable mapping; in order from simple to complex, they are basic media tracking, media monitor custom tracking, and full custom tracking. Which mapping you need depends on what data points you want to capture and whether the measurement library supports the tracking you require. Basic media tracking supports a limited but essential set of data points, including milestones or seconds, video asset name, segment name, and time spent. If you need a few additional data points, such as athlete name and parent page name, then add some media monitor functions to your mapping; examine the sample s_code.js to see how I’ve used media monitor. If your measurement needs are highly customized, it may be easiest to implement a fully custom tracking plan, though custom tracking is out of scope for this post. Adobe Consulting or Client Care can point you in the right direction for custom implementations.
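As an example of the middle option, a media monitor hook can attach extra variables when the library fires its calls. This sketch assumes the legacy s.Media.monitor API; the specific eVar numbers and the getAthleteName lookup are hypothetical placeholders, not part of the library:

```javascript
// Hypothetical media monitor hook in s_code.js. media.event identifies
// which playback point triggered the call; OPEN fires once at video start.
s.Media.monitor = function(s, media) {
  if (media.event === "OPEN") {
    s.eVar4 = media.name;                   // video asset name
    s.eVar5 = media.playerName;             // player name
    // s.eVar6 = getAthleteName(media.name); // your own metadata lookup
    s.Media.track(media.name);              // send the call with the extra variables
  }
};
```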
Once you have all your code pieces in place, it is time to test your tracking calls. The easiest way is to compare the rows of your measurement spreadsheet against the individual tracking calls fired by your video player. To see the calls as they fire, use a packet-sniffing tool like Charles, Fiddler, HTTP Scoop, Firebug, or the Chrome Developer Tools. In the packet sniffer you should see each video milestone send a call, with each variable set to its expected value. Give it a try with the sample player. For video start you should see something like this:
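As an illustration only, a video-start call carries name/value pairs along these lines; the variable names and values here are hypothetical stand-ins drawn from the sample measurement plan, not the exact request your player will send:

```
pageName: Long-Form Video Page
eVar2:    Sample Video
eVar3:    HTML5 Sample Player
events:   event1   (video start)
```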
The Data Collection
After testing with the packet sniffer, it is time to verify that the data is reaching SiteCatalyst and to set up your video variables and reports. In the next post I will walk through the basics of video reporting setup and analysis.