[Guest Post] “Let your clients speak to your Online Help – Google’s Speech API with RoboHelp Responsive HTML5 Output” by Theresa Munanga

As part of our commitment to providing our customers with the best user experience possible, our company implemented speech recognition in our RoboHelp (2015 release) Responsive HTML5 online outputs. We embrace the future of technical communication, so why not incorporate Google’s Speech API? Even though it’s currently available only in Chrome (version 25+), we can use it now and add support for other browsers later.

This is a tutorial on just one way to use Google’s Speech API with RoboHelp HTML5 output files. Here are a few caveats:

Google Speech API Constraints

Because browser speech recognition is still in its early days, we have to work around the Speech API’s limitations. That means we might have to:

We also need to provide instructions for using the voice recognition feature. These can take the form of short instructional text at the top of the main landing page, tooltip-style hover text, a button the user clicks for instructions, a pop-up message, or something else.
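Since only Chrome (25+) supports the API at the time of writing, a simple feature check lets the page fall back gracefully in other browsers. Here is a minimal sketch; the `speechSupported` helper name is mine, not part of the tutorial’s code:

```javascript
// Hypothetical helper: returns true when the page can use speech
// recognition. Only Chrome's prefixed webkitSpeechRecognition existed
// when this was written; the unprefixed name is checked for the future.
function speechSupported(global) {
	return typeof global.webkitSpeechRecognition === 'function' ||
		typeof global.SpeechRecognition === 'function';
}
```

In the browser you would call `speechSupported(window)` and, when it returns false, hide the voice UI or show your fallback instructions instead.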


You should be comfortable with HTML and understand at least a little JavaScript before attempting to use this tutorial with your own RoboHelp projects. Also, make backup copies of the files before you start editing, and save them outside of the project folder.

For this tutorial, you will modify these RoboHelp project output files in a text editor:

Tutorial Examples

For an example of a regular website using speech recognition, visit our documentation portal’s landing page at https://documentation.stchome.com. You must use Chrome browser version 25 or later and click Allow when prompted for microphone access. (If not prompted for microphone access permission, check Chrome’s settings to allow pop-ups and microphone access.) Speak the keywords on the page, which are the main words in the links (i.e., Home, Login, Products, Philosophy, Cartoon, and Socialize).

For an example of RoboHelp Responsive HTML help output using speech recognition, see https://documentation.stchome.com/voicedemo/files/index.htm. This opens RoboHelp’s Employee Care 3 sample Responsive HTML output with our speech recognition additions. The keywords here are Contents (or Table), Index, Glossary, Filter, and Search. For the topic titles, each keyword has an asterisk (*) immediately following it. Also recognized are Main (to re-open the main table of contents list) and Stop (to shut off the Speech API).

Tutorial Steps

This tutorial is based on the Azure Blue screen layout, but it can be adapted to any screen layout as long as you can find the navigation link section that contains the link click commands. For Azure Blue, the click command is $mc.toggleActiveTab().

Note about RoboHelp books

If a book is used in the table of contents without a linked topic, the file path (URL) for the first topic in that book should be used to open the book. For example, if the book title is “Travel” but there is no linked topic, and the first topic in that book is “United States,” the JavaScript code for the “Travel” keyword should contain the file path for the “United States” topic. You can either use two different keywords – one each for the book and the first topic – with the same file path, or the same keyword (and file path) for both.
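The two-keyword approach can be sketched as a small lookup table. The “Travel”/“United States” names continue the hypothetical example above, the `UnitedStates.htm` path is an invented placeholder, and `urlFor` is an illustrative helper rather than part of the tutorial’s code:

```javascript
// Both the book keyword and its first topic's keyword map to the same
// file path, so saying either one opens the "United States" topic.
var keywordUrls = {
	'travel': 'UnitedStates.htm',        // book title with no linked topic
	'united states': 'UnitedStates.htm'  // first topic inside that book
};

// Return the file path for the first keyword found in the transcript,
// or null when nothing matches.
function urlFor(transcript) {
	for (var key in keywordUrls) {
		if (transcript.indexOf(key) > -1) {
			return keywordUrls[key];
		}
	}
	return null;
}
```

In the speech handler shown later, you would then load the returned path into the topic iframe, for example `document.getElementById('vframe').src = urlFor(str);`.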

  1. Edit the Topic.slp file for your screen layout so that the ID tags are automatically included in the index.htm file every time the output is generated.

<div class="functionbar" data-css="width: sidebar_width | screen: 'desktop'" data-class="sidebar-opened: $mc.isSidebarTab(@KEY_ACTIVE_TAB); desktop-sidebar-hidden: @.l.desktop_sidebar_visible == false || @.l.desktop_sidebar_available === false; mobile-header-visible: @.l.mobile_header_visible">
	<div class="nav">
		<a class="toc rh-hide" id="vtoc" data-if="KEY_FEATURE.toc" data-class="active: @KEY_ACTIVE_TAB == 'toc'" data-click="$mc.toggleActiveTab('toc')" data-attr="title:@KEY_LNG.TableOfContents; href: '#'"> </a>
		<a class="idx rh-hide" id="vidx" data-if="KEY_FEATURE.idx" data-class="active: @KEY_ACTIVE_TAB == 'idx'" data-click="$mc.toggleActiveTab('idx')" data-attr="title:@KEY_LNG.Index; href: '#'"> </a>
		<a class="glo rh-hide" id="vglo" data-if="KEY_FEATURE.glo" data-class="active: @KEY_ACTIVE_TAB == 'glo'" data-click="$mc.toggleActiveTab('glo')" data-attr="title:@KEY_LNG.Glossary; href: '#'"> </a>
		<a class="filter rh-hide" id="vfilter" data-if="KEY_FEATURE.filter" data-class="active: @KEY_ACTIVE_TAB == 'filter'; filter-applied: @.l.tag_expression.length"  data-click="$mc.toggleActiveTab('filter')" data-attr="title:@KEY_LNG.Filter; href: '#'"> </a>
		<a class="fts rh-hide" id="vsearch" data-if="@KEY_SEARCH_LOCATION == 'tabbar'" data-class="active: @KEY_ACTIVE_TAB == 'fts'; search-sidebar: @KEY_SEARCH_LOCATION == 'tabbar'" data-click="$mc.toggleActiveTab('fts')" data-attr="title:@KEY_LNG.SearchTitle; href: '#'"> </a>
	</div>
</div>
  2. Generate the output using the modified screen layout.
  3. Edit the newly generated index.htm file (or equivalent) to add one more ID and the JavaScript code.
<div class="topic-state" data-class="loading: EVT_TOPIC_LOADING; filtered: EVT_TOPIC_IS_EMPTY" data-if="@EVT_TOPIC_LOADING || @EVT_TOPIC_IS_EMPTY"></div>
<iframe id="vframe" class="topic" name="rh_default_topic_frame_name"></iframe>
<a class="to_top" data-trigger="EVT_SCROLL_TO_TOP"> </a>
<script type="text/javascript">
	(function() {

		// Define a new speech recognition instance
		var rec = null;

		try {
			rec = new webkitSpeechRecognition();
		} catch(e) {
			// Speech recognition is not supported in this browser
		}

		if (rec) {
			rec.continuous = true;
			rec.interimResults = false;
			// In this case, we're using English
			rec.lang = 'en';

			// Uncomment this function to keep the microphone working
			// if using HTTPS. Otherwise, the microphone needs to be
			// reset after 10 seconds of silence.
			//rec.onend = function() {
			//	rec.start();
			//};

			// Set the confidence level threshold for recognition results
			var confidenceThreshold = 0.5;

			// Check for the existence of "s" in the string
			var userSaid = function(str, s) {
				return str.indexOf(s) > -1;
			};

			// Process the results when returned
			rec.onresult = function(e) {

				// Check each new result, starting from the first unprocessed one
				for (var i = e.resultIndex; i < e.results.length; ++i) {

					// If this is a final result
					if (e.results[i].isFinal) {

						// Check that the result is equal to or greater than the required threshold
						if (parseFloat(e.results[i][0].confidence) >= parseFloat(confidenceThreshold)) {
							var str = e.results[i][0].transcript;

							// Write what the computer heard to the console so we
							// can check it if there are problems
							console.log('Recognized: ' + str);

							// What did they ask for? Clicking the IDs added in step 1
							// triggers the layout's own toggleActiveTab handlers.
							if (userSaid(str, 'glossary')) {
								document.getElementById('vglo').click();
							} else if (userSaid(str, 'index')) {
								document.getElementById('vidx').click();
							} else if (userSaid(str, 'content')) {
								document.getElementById('vtoc').click();
							} else if (userSaid(str, 'table')) {
								document.getElementById('vtoc').click();
							} else if (userSaid(str, 'filter')) {
								document.getElementById('vfilter').click();
							} else if (userSaid(str, 'search')) {
								document.getElementById('vsearch').click();
							} else if (userSaid(str, 'main')) {
								// Re-open the main table of contents list
								var toc = [{"type":"item","name":"Projects Overview*","url":"WorkWithProjects.htm"},{"type":"item","name":"Add* or Edit a Project","url":"AddEditProject.htm"},{"type":"item","name":"Project Phases*","url":"Phases.htm"},{"type":"item","name":"Link* Projects","url":"LinkProjects.htm"},{"type":"book","name":"Contacts*","key":"toc3"},{"type":"book","name":"Actions*","key":"toc4"}];
								window.rh.model.publish(rh.consts('KEY_TEMP_DATA'), toc, { sync:true });
							} else if (userSaid(str, '[keyword]')) {
								// Template: replace [keyword] with a spoken topic keyword
								// and load that topic into the topic iframe
								document.getElementById('vframe').src = '[topic file path]';
							} else if (userSaid(str, 'policies')) {
								document.getElementById('vframe').src = '[file path for the Policies topic]';
							} else if (userSaid(str, 'attendance')) {
								document.getElementById('vframe').src = '[file path for the Attendance topic]';
							} else if (userSaid(str, 'sick')) {
								document.getElementById('vframe').src = '[file path for the Sick topic]';
							} else if (userSaid(str, 'stop')) {
								// Shut off the Speech API
								rec.stop();
							}
						}
					}
				}
			};

			// Start listening
			rec.start();
		}
	})();
</script>
  4. Upload the output help files and test them. (The Speech API only works when the files are online and accessed over the internet.) To test the voice recognition feature:

What Next?

Congratulations! You’ve incorporated speech recognition into your help files! If you want to continue from here, you can: