Adobe DITAWORLD 2020 – Day 1 Summary by Kyra Lee and Grace Smith
by Stefan Gentz
Posted on 10-07-2020
The first day of this year’s DITAWORLD kicked off! It started with warm welcomes from our expert hosts, Stefan Gentz, Adobe’s Senior Worldwide Evangelist, and Matt Sullivan, CEO of TechCommTools. In this blog post, we sum up the presentations from Day 1 of the conference, which opened with several information-packed talks. Let’s review the day’s events.
In this post:
- [Welcome Note] Dayal (Adobe): Welcome to Adobe DITAWORLD 2020
- [Keynote] Smid (Acrolinx): Content powers Experiences
- Cacciacarro (Blackberry): From Innovation comes Success
- Aschwanden (Publishing Smarter), Jernberg (Broadcom): Content as a Corporate Asset
- Dybdahl (Adobe): Content delivery: set it and forget it
- Aldous (TheContentEra): The Electronic Code of Federal Regulations
- Wiedenmaier (c-rex.net): Going global? Going global!
- Swisher (Content Rules): Transforming Content
- [Keynote] Sedegah (Adobe): Customer Experiences – Powered by AI
Amit Dayal, Vice President & General Manager at Adobe, started the day off with a review of what DITAWORLD is about. Since 2016, the conference has been capturing the imagination of content creators all over the world. There are over 5,700 registrants for this year’s conference! Meanwhile, the chat is blowing up with attendees from all over the world greeting each other and expressing their excitement.
Amit talked about how the current pandemic is shifting the world online and how Adobe continues to innovate to accommodate the digitalization of our lives. For example, Adobe recently launched Liquid Mode, which uses AI to improve the presentation of PDFs on mobile devices. Many attendees in the chat were enthusiastic at the mention of Liquid Mode and agreed that it is amazing. As for the Tech Comm Suite, Adobe has focused on helping teams collaborate more easily while working remotely. For the first time, RoboHelp is also available on macOS, and it supports microcontent such as snippets. Native integration with platforms like SharePoint is also available for real-time collaboration. Adobe Experience Manager enables comprehensive DITA content management and allows multiple authors and contributors to collaborate.
Amit ended his talk after thanking the sponsors of DITAWORLD 2020 and again extended a welcome to all the attendees.
The CEO of Acrolinx, Volker Smid, started his presentation with the classic trio of “Can you hear me?”, “Can you see me?” and “Can you see my slides?” Ah, the beauty of online conferences. Volker also noted that his son sometimes barges into the room when he works. People empathized with Volker as a lively discussion about the struggles of working from home broke out.
Volker then argued that we should start looking at content as an asset in the enterprise. According to Volker, we consume all content in the same way, regardless of whether it is marketing or technical content. This is why technical content is a significant part of any customer experience. Although a lot of data was presented at the beginning, Volker argued that the presentation is not about the 4.5 billion people on the internet, the 1.7 billion commercial websites, or the 60 billion pages of content. It is about what those numbers show us: content comes in many different formats, and they all compete for attention.
This is why businesses have to determine how they measure the value of their content in this age. Not doing so is simply not an option. It is also important to create positive user engagement with findable, clear, inoffensive, and emotionally appealing content. Having content that works is crucial for great customer engagement and experience. If you don’t focus on the customer experience that your content powers, you can’t reach your business goals!
Your content engages your audience and is always competing for attention. As a brand, you should focus on making your content findable and useful to improve the user experience. Content is a great investment: there is often huge demand, huge consumption, and huge cost, but little governance to ensure it improves and makes an impact.
Volker quoted Cruce Saunders’s point that creating content is not value-producing; only the consumption of content is. Due to the pandemic, we are consuming more digital content than ever. Content comes in different shapes and forms, but the moment we encounter a piece of content, regardless of what it’s about, we associate it with the larger brand image. This is why enterprises must support writers with guided alignment and reinvest the savings into strategic content creation.
To conclude, Volker emphasized once again that:
- Content is the connection to your audience
- Content has a cost and should deliver business impact
- Content impacts must be governed holistically
- Content creation and maintenance should be strategic, not technical
To learn more, you can pre-register for Acrolinx’s eBook on content audits, contact Acrolinx, or connect with Volker on LinkedIn.
Marco Cacciacarro, Senior Technical Writer at BlackBerry in Canada, began his informative presentation, “From Innovation comes Success.”
Just an hour in, there was a brief disconnect. The chat did not mind the short breather to reflect on what we had already learned. Although we may be familiar with what BlackBerry was (I always wanted a BlackBerry Curve growing up!), Marco explained that we may not know the full extent of what it has grown into today. He then explained the reasoning behind the migration to Adobe Experience Manager and why they chose it over other CMS options.
BlackBerry before Adobe Experience Manager:
- 2017 was a difficult transition year for them.
- They were limited by the tech that they did have: an aging DITA CMS and website.
- No IT support was available, as IT was moving to other areas of the business.
This left them (as Marco put it best), “On a tightrope without a safety net.”
With a great set of writers on board, they understood their needs. Their solution? Find a new CMS that would give them the control they required. XML Documentation for Adobe Experience Manager stood out for several key factors, which formed the core of his presentation. Its biggest advantage is that it plugs seamlessly into the Adobe Experience Manager web platform. The complete CMS gives you what you need to:
- Publish DITA content into different formats
While also being able to:
- Publish a complete website that features your DITA content.
This was the opportunity to go from zero control over their CMS and website to complete control over both tools.
Marco gave us several informative demos throughout the presentation by screen-sharing his live system. The first was the backend of one of the product pages that he owns, created, and released onto the site. The product in question: BlackBerry Protect Mobile for UES. Moving through the template, he showed us the different web components (buttons, images, text components, product boxes, and more!). Writers can add and configure these components to build and construct their web pages.
When training new writers, Marco frequently gives the analogy, “We have all of the Lego blocks we need, thanks to the ease of use of the AEM templates” (this would definitely let me breathe a sigh of relief; who doesn’t love Lego?). With these templates in place, the writers have complete control over:
- The page flow
- How pages are arranged and categorized
- Links between pages
- Their own URL structure, etc.
AEM provides the tools; we, the technical communicators, build our wondrous towers of content!
Here are three AEM features that are easy to use and solve challenges:
The map builder tools are very easy to use. Content can be dragged directly into the map and dropped, topics can be grouped, the hierarchy can be easily changed, and more!
The system offers a sophisticated versioning model. This gives you the granular control you need to capture the state of your content at any moment in time, whether for a current or a past release.
Common DITA authoring features are implemented in a practical and functional way. Some of the newest features include:
- Usability updates that have significantly improved the user experience
- Navigation of the docs repository can be done directly within the authoring view (navigate to any docs or folders and open any topics you require with ease, all in the same view!).
- Internal links can be added directly and easily to topics by dragging and dropping into an element. Conditions can be applied the same way (the colour coding used for conditions was noted and well liked by the chat!).
- The right-click menu allows you to swap elements easily.
- Error validation checks are very helpful: if you delete a bracket, the system will “scream” at you that something is wrong. Saving cannot happen until the error has been rectified. (Is there any way to get that system implemented into my daily life?)
Marco later walked us through one of BlackBerry’s specific challenges with beta docs and adapting to the unique needs of a beta release. He noted that BlackBerry’s beta program is key to the success of their products. However, what happens if significant features are pulled at the last minute before the beta start date? Although the dev team can just cut a different code branch, the tech docs may be left holding the short end of the stick (yikes!).
Secondly, beta customers often want to test new features by running through the same test cases as the QA team; sometimes they may just follow the test cases instead of reading the official docs. The answer to both challenges lies in DITA’s features and their implementation in the AEM CMS.
One key answer to these challenges is conditional attributes: one condition for when the feature makes it in on time, and another for when it is pulled before the beta release. Content is authored for both scenarios, and these conditions create a flexible environment regardless of what makes it into the beta release. The documents account for every eventuality.
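To make the idea concrete, here is a minimal sketch of attribute-based conditional filtering. The topic, attribute values, and helper function are hypothetical (real DITA publishing applies a DITAVAL filter through the DITA Open Toolkit or the CCMS), but the principle is the same: author both variants, then exclude one at publish time.

```python
# Minimal sketch (not BlackBerry's actual setup) of how conditional
# attributes let one DITA source serve two beta scenarios.
import xml.etree.ElementTree as ET

topic = """
<topic id="beta_notes">
  <title>Beta release notes</title>
  <body>
    <p audience="feature-shipped">Use the new Export panel to share reports.</p>
    <p audience="feature-pulled">The Export panel is not available in this beta.</p>
    <p>Contact support with any feedback.</p>
  </body>
</topic>
"""

def filter_topic(xml_text: str, exclude: str) -> ET.Element:
    """Drop every element whose audience attribute matches `exclude`."""
    root = ET.fromstring(xml_text)
    for parent in root.iter():
        for child in list(parent):  # copy so removal is safe
            if child.get("audience") == exclude:
                parent.remove(child)
    return root

# If the feature was pulled before the beta, exclude the "shipped" variant:
published = filter_topic(topic, exclude="feature-shipped")
texts = [p.text for p in published.iter("p")]
```

If the feature ships after all, the same source is published with `exclude="feature-pulled"` instead; no content is rewritten either way.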
Now, was this effective? (Spoiler: very!) Although it added a bit of extra overhead, it allowed for maximum flexibility without wasted effort. If a feature drops out of the beta, they remove the conditions and adjust the content as needed for the final release. Marco continued to provide informative demos that gave great insight into challenges we may all know too well. These included:
- A demo showing the “Get the PDF” link in online content, giving constant access to fully searchable PDFs.
- Navigation bars that allow access to any product from any page regardless of how a customer arrives at it.
- Structured writing that allows for freedom of choice. Marco provided a demo showing a pre-populated Blackberry Task template.
- AEM CMS 3.6’s new feature, Snippets, which he briefly showed at the end of his presentation.
These are just a few of the many ways Adobe Experience Manager encourages the consistent use of DITA. Marco Cacciacarro’s demonstrations let us see first-hand how friendly Adobe Experience Manager actually is for technical communicators, and how it is constantly evolving. A content management system both for the technical communicator AND for the product users!
A couple of the notable questions from the Q&A pod:
Hanna Heinonen asked: How much content do you reuse between publications? We try to avoid using xrefs between topics as it would be a reuse barrier for our content. Or do you use conditions for the linking? Marco answered: We do reuse a fair bit of content across publications, but try to enforce some pretty clear guidelines around using defined blocks/segments of content so that you don’t hit problems with broken xrefs.
Fonda asked: What is the review process like using AEM? Marco replied: You have a few options. There is a built-in method where you can submit topics via a review task, and an assigned reviewer can add markup via a “review mode” similar to PDF commenting. The writer can then implement the changes via right-click accept/reject. The latest version of the CCMS includes a very nice track-changes feature directly in the web editor, so we’ve actually moved towards using that for copy editing in recent months.
Jo’lene Jernberg, Senior Manager of Technical Publications and Web Content Strategy at Broadcom, and Bernard Aschwanden, CEO of Publishing Smarter, were tasked with helping Broadcom migrate content to DITA after its acquisitions of CA Technologies and Symantec.
Bernard experienced technical difficulties and was not able to join the presentation at first. Luckily, he managed to join, so Jo’lene didn’t have to present by herself! It’s fun when a session has more than one speaker and becomes a discussion. While the slides helped guide them, Bernard and Jo’lene really had a chance to dive into the details. More than a presentation, this was a chance to hear the good and the bad of going from source content to quality content. It’s not often you get to hear how much content really matters to a company worth billions of dollars!
To kickstart their conversation, Jo’lene gave a brief synopsis of Broadcom’s acquisitions of CA Technologies and Symantec. Since Broadcom dealt mainly with hardware before acquiring CA Technologies, they had to pivot to a single system that considers and includes software technologies in content delivery.
Not only was Broadcom facing the challenge of migrating a large amount of content (1.7 million pieces!), they also had to find a new mechanism to present, publish, and author it. One good thing that came out of these challenges is that they eased the later acquisition of Symantec: by then, Broadcom had already built a better architecture and structure, and was dealing with a smaller migration. The tool they migrated both content sets to is the XML Documentation solution for Adobe Experience Manager.
So, what issues did Broadcom face in the past? Jo’lene pointed out a few:
- non-uniform content (which made authoring and editing very hard)
- output generation issues because of previous overrides
- unproductive folder structures (which negatively impacted graphics and slowed generation)
- content in multiple languages
Bernard then took over and talked a bit about the source content from the publishing standpoint. He mentioned that the source files were a mix of legacy CCMS and authoring tools, which made structures inconsistent and slowed the migration process. However, the migration went much smoother once they had figured out the legacy CCMS patterns. The biggest problem was the lack of a standard across different pieces of content. He argued that DITA, because it is an industry standard rather than a tool, serves as a superior basis for a uniform content structure.
These are the benefits of DITA:
- a common and consistent look and feel (and easier to change the look and feel across documents in the future!)
- the structures of DITA will always stay the same (regardless of the authoring language)
- security and risk management
Sure, it all sounds good, but we all know that plan and reality do not always align. For Jo’lene, the biggest question during migration was, “How do I decide when to move content to a new system?” Her advice: consider delivery dates when scheduling the migration, plan for training, and remember to account for mistakes in conversion.
When Bernard was talking about DITA training with his co-presenter (his beer), a discussion broke out in the chat about how DITA training saves content creators a lot of pain. Bernard agreed and recommended always finding the most complex piece of content to test scripts on. That way, things only get easier from there.
According to Jo’lene, cleaning up the content led to a whopping 66% decrease in the time needed for generation, saving 266 hours (6 weeks) in publishing alone! Imagine working at a big tech company like Broadcom and spending a month and a half just clicking File > Publish as > HTML5!
Bernard also mentioned the pain of not having a physical space for collaboration. Running into technical issues and feeling frustrated, however, somehow humanized remote collaboration.
As Bernard’s engineer friend said, “The best thing about standards is that there are so many of them!” Aside from nicely migrated, consistent, and reliable content, having a uniform DITA structure and streamlined folder structures makes future content migration much easier and more seamless. Generation is also smoother, as standard DITA got rid of the previous overrides. One big advantage of using AEM is that it also allows content owners to interact with their customers and get direct feedback.
Their key takeaways:
- Preparation and planning
- People (having real-life interactions to build a good working relationship is helpful)
- Tend to details, but think Big Picture
For anyone who wants to connect with Jo’lene and Bernard, simply refer to the QR codes for their LinkedIn profiles in the slides. For those who want to learn more about DITA content migration, you can also go to https://www.slideshare.net/PublishingSmarter.
A few of the notable questions from the Q&A pod:
Archana asked: Did you encounter any major roadblocks migrating source content from another CMS (DITA-based) to AEM? Bernard’s answer: They didn’t. “That is, we were able to find patterns in the other CCMS. It was based on DocBook though. The real problem is when people say ‘we have a CMS and we load all our Word content into SharePoint’.”
Archana also asked: Besides Tech Docs, are there any other groups that are using AEM at Broadcom? Jo’lene answered: Just tech docs at this time.
An interesting point from Uthara Nair: We had to categorize every paragraph into concept, procedure and reference types prior to moving content to DITA. Bernard’s reply: Yeah … When we can, we ID patterns. Even if we go in and tag content with a heading that is a Head1-Task or a Head2-Reference we can then take the “sub” content (until it is in another level of heading) and map that to topic/concept/task/reference.
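Bernard’s pattern-mapping idea can be sketched in a few lines. The heading style names and the mapping below are hypothetical, purely to illustrate how a heading that carries a type hint can route the content under it into a DITA topic type:

```python
# Hypothetical sketch of heading-pattern mapping: legacy headings
# such as "Head1-Task" hint at the DITA topic type, so the "sub"
# content under each heading can be grouped under that type.
HEADING_STYLE_TO_TOPIC = {
    "Head1-Task": "task",
    "Head2-Reference": "reference",
    "Head1-Concept": "concept",
}

def classify(blocks):
    """Group (style, text) blocks into DITA topic types by heading style."""
    topics, current = [], None
    for style, text in blocks:
        if style in HEADING_STYLE_TO_TOPIC:
            current = {"type": HEADING_STYLE_TO_TOPIC[style],
                       "title": text, "body": []}
            topics.append(current)
        elif current is not None:
            current["body"].append(text)
    return topics

doc = [
    ("Head1-Task", "Install the agent"),
    ("Body", "Run the installer."),
    ("Head2-Reference", "Command options"),
    ("Body", "-v prints the version."),
]
topics = classify(doc)
```

A real migration script would of course emit DITA XML for each topic rather than dictionaries, but the pattern-to-type routing is the core of the approach.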
Chad Dybdahl, Solutions Consultant at Adobe, specializes in structured authoring, specifically DITA-based content management and solutions. “Set it and forget it” references Ronco’s Showtime Rotisserie & BBQ, where you put your roast inside, set it, and forget it! Prepare for a rhetorical-question-packed presentation in 3…2…1.
How many people use the content you create? This may be sort of an unknown, but it’s an important question. How many people do we actually reach?
Chad soon displayed a sea of stickmen on the screen that depicted the consumers along with yourself, which posed another important question. Who are they?
Who is the most important person in this scenario? Answer: the customer.
Without the customer, we lose out on sales, which loses money, and so on. How do we determine who our customers are? Personas may be one answer, but they can be inward-looking, since they are your impression of the audience.
What devices are used to consume your content? Consider all of the different ways users may decide to consume it. Users will find it much easier to work through a PDF on a computer than on a phone (has anyone tried opening a PDF on a smartwatch? Ha! Imagine). As technical communicators, we must consider these differing device sizes.
What does your delivery pipeline look like? Is your answer a set of point solutions that have been cobbled together that risk running into issues when considering integration? Or is it a lower tech version where you generate a PDF from FrameMaker that gets emailed to the person that posts it?
Chad displays a shovel on the screen. This shovel depicts a slower delivery of the content. Chad then displays a shiny red tractor on the screen. This represents technology you can use when you need help the most. Although things can be done manually, we must not shy away from the idea of using technology where it can work best. This is definitely something we can all relate to.
Well? There must be a better way, right? How can we display our content better? Luckily for us, there is a chance that one platform may rule them all. One place where customers can easily find and use content on the device that they desire. Drum-roll, please… you guessed it! Adobe Experience Manager. No integrations, no send-offs required, all done under one roof.
Adobe Experience Manager uses two modules that live inside the same tool:
- Adobe Experience Manager Assets (the content repository)
- Adobe Experience Manager Sites (sophisticated scalable delivery platform for hosting content).
But wait, let’s spin back around and answer a fundamental question: who is using your content? This can be answered through the personalization Adobe Target provides, with analytics delivered through Adobe Marketing Cloud. You can see your most popular topics and use them to improve your product design. It’s like an analytics platform and a suggestion box rolled into one! Score!
Adobe Experience Manager provides all of this in a single, fully integrated platform: the only platform that can do everything mentioned above in a single instance.
Create: Create and manage native DITA content using the directly embedded XML editor within AEM.
Manage: You hold the ability to go through version history, see differences, add labels, etc.
Review: Non-destructive review collaboration where it is up to the content author to incorporate the changes.
Translate: Manage multiple language variants by integrating directly with the translation management service of your choice.
Publish: One click deployment (may sound too good to be true, but it’s a reality!).
Another important advantage when using a single platform such as AEM: marketing content. AEM is widely used for marketing content (pre and post sales) using a shared taxonomy.
Chad ended his insightful presentation with one final question: when will the slide show end? He then moved on to the question-and-answer section and provided a quick demo of the usability Adobe Experience Manager’s XML editor has to offer. A notable message in the chat wished for the ability to annotate and capture slides to store in a “virtual attendee account notebook”. This comment accurately captures the information-packed presentation Chad Dybdahl gave us, as well as his ability to clear up the very rhetorical questions he raised. Does this mean that the questions were not rhetorical after all, or is he just that cool?
Thomas Aldous, CEO and founder of The Content Era, gave a great presentation and demo on how he automated the DITA conversion process with XSL scripting. I must note that Thomas has a real talent for making complicated information clear, concise, and much more understandable. As someone with little to no prior knowledge of XSL, I had a hard time understanding his presentation at first but found myself feeling much less dazed by the end.
To start, Thomas gave a brief introduction to his project and experience with the GSA (General Services Administration), an independent agency of the United States government that regulates how goods and services are acquired. Thomas worked with the team at the GSA that manages the document called the Federal Acquisition Regulation (FAR). That content was originally in unstructured FrameMaker, and Thomas and his team spent four years converting it to DITA/XML. According to Thomas, having the content in a DITA architecture makes future content migration and editing much easier.
A little backstory about the content: it is required by law that all US federal agencies have their regulatory content available in the electronic Code of Federal Regulations (eCFR). The good thing is that there is a compiled version of all the regulations used to contract with the Federal Government, in a simple structure that is easy for writers to work with regardless of the format. All the different federal agencies submit their content in whichever format they used to author it, and it all eventually gets converted to a custom eCFR XML structure. But why the eCFR structure? According to Thomas, it provides a much better solution than managing unstructured or MS Word content, and it gives writers a lot more flexibility for customizing things like overrides and metadata. As he put it, “once you master creating XSL scripts, you can do pretty much anything with your XML content.”
As for the scope of the project, they dealt with hundreds of thousands of pages of regulations. All regulations are broken down into 50 titles handled by different groups; Thomas and his team worked with the GSA on Title 48. The GSA posts some of the regulations from the eCFR on its Acquisition.gov regulations site, which has a modern architecture that uses DITA as the source for the main regulation documents. For the files grabbed from the eCFR, the GSA relies on TCE to convert the content to DITA and enrich the metadata so it fits into the Acquisition.gov framework. This process is much easier and faster than migrating content directly from MS Word! Comparing eCFR to DITA content used to be manual and time-consuming, but it is now automated.
Thomas then gave us a live demo of how the conversion process is carried out. Before the conversion, he was dealing with 92,000 lines of code with no DTD and no unique IDs. To solve that, he built XSL files that automated adding a DTD and assigning unique IDs to create DITA content and maps. To add more intelligence to the content, he also ran a series of XSLs to convert and enrich the content with xrefs and metadata. For anyone who is new to XSL like me, Thomas mentioned his five-part XSL training class; go ahead and Google it if you are interested in learning more.
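As an illustration of one of those steps, here is a sketch of assigning unique IDs during conversion. Thomas’s actual pipeline used XSL scripts against the eCFR structure; the element names and ID scheme below are made up for the example:

```python
# Illustrative only: the real pipeline used XSL, but the core idea
# (walking the XML and giving every topic-like element a unique,
# stable id) can be shown with Python's stdlib.
import xml.etree.ElementTree as ET

source = """
<regulation>
  <section><title>Scope of part</title></section>
  <section><title>Definitions</title></section>
</regulation>
"""

def assign_ids(xml_text: str, prefix: str = "far") -> ET.Element:
    """Give every <section> a unique, human-readable id attribute."""
    root = ET.fromstring(xml_text)
    for n, section in enumerate(root.iter("section"), start=1):
        title = section.findtext("title", default="untitled")
        slug = title.lower().replace(" ", "-")
        section.set("id", f"{prefix}-{n:03d}-{slug}")
    return root

root = assign_ids(source)
ids = [s.get("id") for s in root.iter("section")]
```

Stable IDs like these are what make cross-references (xrefs) and reuse possible once the content is in DITA.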
According to Thomas, all of the DITA content and published outputs actually live in GitHub, which acts as the integration layer between any DITA authoring tool and the Acquisition.gov website. The best thing about it is that you can share and grab the content through a web browser. And the good thing about DITA is that it is a standard: you can author it with any tool you want. It frees the content from the tool!
His presentation ended with the importance of XSL in automating the transformation, merging, and filtering of XML content. He argued that XSL knowledge is extremely important if you want to be a DITA expert. So, there you go: learn XSL if you want to be a DITA master.
Since his presentation ended with 20 minutes left, Stefan and Matt prompted Thomas to talk about his experience with FrameMaker. Thomas was more than happy to talk about FrameMaker because, according to him, it is his favourite editing tool. He likes FrameMaker so much because:
- FrameMaker dynamically drives the formatting of content, keeping all DITA content living in the repository pristine
- It allows you to configure DITA preferences
- It is very user-friendly
- It makes converting content from Word and HTML to DITA very easy
As a side note, Thomas also mentioned that FrameMaker 2020 opens and processes his 3,800 topics much faster and more responsively than the 2019 release.
Markus Wiedenmaier, CEO at c-rex.net in Germany, began his presentation, “Going global? Going global!”, by introducing himself and c-rex.net.
What is XLIFF?
XLIFF is an XML localization interchange file format. It is:
- An OASIS standard
- Source-format independent
- Interface format for CAT tools (most CAT tools support XLIFF)
- Bilingual format
- First version (1.2) in 2008
- Current version (2.1) in 2018
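To make the “bilingual format” point concrete, here is a minimal, hand-written XLIFF 2.0-style fragment (namespaces and most required metadata are omitted for readability; real XLIFF files are generated by tools and carry much more). Each segment pairs a source with its target:

```python
# A simplified XLIFF-style fragment: every <segment> holds the
# source text and its translation side by side, independent of
# the original source format (DITA, MIF, ...).
import xml.etree.ElementTree as ET

xliff = """
<xliff version="2.0" srcLang="en" trgLang="de">
  <file id="f1">
    <unit id="u1">
      <segment>
        <source>Press the power button.</source>
        <target>Drücken Sie die Ein/Aus-Taste.</target>
      </segment>
    </unit>
  </file>
</xliff>
"""

root = ET.fromstring(xliff)
pairs = [(seg.findtext("source"), seg.findtext("target"))
         for seg in root.iter("segment")]
```

Because the format is source-independent and standardized, any CAT tool can fill in the targets and any downstream processor can merge them back into the original structure.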
Markus worked impressively through his agenda, as follows:
Background on XLIFF translation
Markus went over the top five reasons why companies invest in translation, namely to:
- Improve customer service
- Increase brand value
- Comply with law and regulations
- Expand market share
- Grow revenue
Saving money (translation is expensive!)
- Cost is rising not because translation is getting more expensive; there is simply more content to be translated (it has doubled in the last 10 years and is still increasing!).
The current translation process begins in-house: content within a repository is zipped and emailed to the translator. The translator then begins their job of importing the content into a CAT tool and converting it to an internal format. The content is pre-translated, translated, post-edited, and exported back to the file system (converted to DITA and sent back to the content owner). This is all done by the language service provider. The content is then sent back in-house, imported into the repository, reviewed, released, and published (is anyone else’s head spinning? Wow!). Although this process does work, you are:
- Not the owner of your content
- Not the owner of the process
- Not free to change:
  - The tools
  - The data structure
  - The service providers
- Not able to automate translation
The translation process with XLIFF begins the same way, in-house, except the content is converted to XLIFF and sent to the LSP automatically. Instead of the configuration process beginning at the LSP, it now begins in-house (very cool!). Content is then translated by importing the XLIFF, going through the pre-translation/translation/post-editing process, updating the TM, and then exporting the XLIFF. This is a much more compact process.
Again, the XLIFF files are sent back in-house, and the DITA is imported into the repository (Git, Bitbucket, CCMS), reviewed, released, and published wherever required. This process is not only much more compact but also capable of becoming fully automated (very, very cool!).
This process through XLIFF allows you to:
- Own your content
- Control the process
- Save money
- Automate the translation process
Markus urged technical communicators to check where 100% is really required: “Not every piece of content needs 100% quality, and not every language needs 100% quality.” What about machine translation? Check how metadata can help your process save costs! Think: is reuse in translation necessary? These are questions worth thinking about.
Metadata is your key to success: Markus showed a great graphic of a key plugged into a computer system with the text, “Thinking in metadata IS content strategy.”
XLIFF with FrameMaker 2020:
Markus provided us with an XLIFF demo in Adobe FrameMaker 2020, where we were able to see the translation process from English to German.
1. The DITA sources are opened using an ITS processor or SRX processor.
2. They are exported to the XLIFF source using the XLIFF export settings, where you can choose whether to apply XLIFF segmentation rules.
3. The XLIFF source is sent to the CAT tool.
4. The CAT tool produces the XLIFF targets.
5. The targets are imported and opened in FrameMaker, then imported into the DITA targets.
XLIFF generation is available out of the box by selecting it during installation.
- ITS rules for DITA 1.1–1.3 are completely pre-configured.
- Highly configurable through XLIFF translation settings (XTS)
- Works completely on standards
- MIF translation is also integrated.
XLIFF and SRX at a glance:
- From XML to XLIFF, the XML and the ITS rules are fed into a processor to create an XLIFF file for translation.
- In a second step, the XLIFF and SRX content go through a second processor to create a segmented XLIFF file.
- After translation, the XLIFF content and the skeleton (the shadow data) are merged by a processor to create the XML files for publishing.
- The skeleton can be embedded into the XLIFF file or kept external, configured in the XLIFF export settings.
Markus concluded by reiterating his points, including that the XLIFF process allows you to regain ownership of your content with control over the process. XLIFF is a powerful tool that provides a compact process, saving the valuable assets we all know and love as technical communicators: time and money.
A few of the notable questions from the Q&A pod:
- Nora Smart asked: "Does the new XLIFF support work with unstructured FrameMaker?" Answer: Yes! The feature video Matt Sullivan did on XLIFF can be accessed here.
- Liz said: "I have XML files that represent software configuration tables I need to document. How could the XML be converted to DITA/structured FM?" Answer: You would write (or contract for) XSLT to process the XML to DITA and back again.
- Another question from Liz: "Is it possible to use scripts to select content automatically rather than having authors do it by hand? E.g., automating content inclusion for different products and versions?" Answer: Yes, with sufficiently intelligent content; AEM is an API-first platform and could be extended to achieve this.
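The conversion the Q&A answer describes can be sketched in a few lines. The input format below is a hypothetical software-configuration table, not Liz's actual files, and a production pipeline would typically do this with XSLT; plain ElementTree keeps the example self-contained.

```python
# Sketch: convert a hypothetical configuration-table XML into DITA
# reference markup (<properties>/<property> rows).
import xml.etree.ElementTree as ET

CONFIG = """<config>
  <setting name="timeout" value="30" desc="Seconds before retry"/>
  <setting name="retries" value="5" desc="Maximum retry count"/>
</config>"""

def config_to_dita(xml_text):
    root = ET.fromstring(xml_text)
    props = ET.Element("properties")
    for s in root.findall("setting"):
        row = ET.SubElement(props, "property")
        # Map each source attribute onto the matching DITA element.
        for tag, attr in (("proptype", "name"),
                          ("propvalue", "value"),
                          ("propdesc", "desc")):
            ET.SubElement(row, tag).text = s.get(attr)
    return ET.tostring(props, encoding="unicode")

print(config_to_dita(CONFIG))
```

The reverse direction (DITA back to the configuration format) is the same mapping applied in the other direction, which is why the answer frames it as "to DITA and back again."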
Digital content transformation should create a completely new and improved customer experience with your content. Val Swisher, CEO of Content Rules, has highly contagious energy and gave a useful, entertaining presentation on how to breathe new life into legacy content. Attendees agreed that the presentation was useful and digestible for both new and experienced technical communicators. So let's jump in and review what it was about.
Val often finds herself talking about digital transformation with almost every customer she meets; these days, every customer, in both technical communication and marketing communication, wants to talk about it. Why? What is digital transformation? Val argues that digital content transformation boils down to "the creation of a new customer experience with the content."
Like many technical communicators, the first thing she used to think about when considering transformation was "tools." However, tools do not create a new experience. In Val's words, "if you put crappy content into new, expensive tools, you just get expensive crappy content." New tools without new content are not going to create happier customers. The bottom line of digital content transformation is that it makes content more versatile and reusable. It should move content from "one size fits one" to "one size fits many." The transformation should be about re-imagining both legacy and new content.
People are always learning new ways of developing software, so why not new ways of writing? So how do we transform content digitally?
There are four steps:
- Locate ALL content
- Decide which content to transform
- Determine how to transform
- Do the work
Val then gave a very comprehensive overview of how to develop a transformation strategy. When locating your content, you must find ALL of it and create a content inventory. Then select what to transform: which pieces to convert as-is, which to convert and rewrite, and which to leave alone. So how do we decide what to transform? Consider the value and popularity of the content. The bottom line is to end up with chunks of reusable, valuable content and maximize their value! As for the transformation method, Val recommends that we convert content to a different format, rewrite to improve it, and standardize the terminology, style, voice, and tone we use. This is the only way to successfully transform the user experience, because users engage with consistent content.
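The "decide which content to transform" step can be sketched as a simple scoring pass over the content inventory. The value/popularity weights, thresholds, and action labels below are illustrative assumptions, not figures from Val's presentation.

```python
# Sketch: bucket inventory items into transformation actions based on
# a weighted value/popularity score. All numbers are illustrative.
inventory = [
    {"title": "Install guide",     "value": 9, "popularity": 8},
    {"title": "Legacy FAQ",        "value": 3, "popularity": 7},
    {"title": "2012 release note", "value": 1, "popularity": 1},
]

def plan(item, w_value=0.6, w_popularity=0.4):
    score = w_value * item["value"] + w_popularity * item["popularity"]
    if score >= 7:
        return "convert and rewrite"
    if score >= 3:
        return "convert as-is"
    return "leave as-is (or retire)"

for item in inventory:
    print(item["title"], "->", plan(item))
```

Even a crude score like this forces the inventory conversation Val recommends: you cannot rank what you have not located.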
“But how are we supposed to do all this work when we have no time?”
Val argued that although most people feel like they have no time to do this, it is very important to do the work because your content must be structured to deliver personalized experiences and train future AI to deliver the content.
In the next section, Val gave a comprehensive list of recommendations to improve content transformation:
- Use discrete content types
- Make sure each content piece is big enough to stand alone and tell a full story, but small enough that it does not say a word more than that.
- Use standardized terminology and titles, or your customers cannot follow along.
- Avoid dependent language like "previously" and "later in this document" so chunks stay reusable. This is the key to breaking away from monolithic content!
- Follow structured authoring guidelines
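The dependent-language guideline lends itself to automation: a small linter can flag phrases that tie a chunk to its surrounding document. The phrase list below is a small illustrative sample, not an exhaustive style-guide rule set.

```python
# Sketch: flag "dependent language" that makes a content chunk
# non-reusable outside its original document.
import re

DEPENDENT_PHRASES = [
    "previously", "later in this document", "as mentioned above",
    "in the next chapter", "see below",
]

def reuse_warnings(chunk):
    found = []
    for phrase in DEPENDENT_PHRASES:
        if re.search(rf"\b{re.escape(phrase)}\b", chunk, re.IGNORECASE):
            found.append(phrase)
    return found

print(reuse_warnings("As mentioned above, configure the port. "
                     "Later in this document we cover logging."))
```

A check like this fits naturally into a structured-authoring pipeline, running on each topic before it enters the reuse pool.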
When Val was pointing out the main problems with content transformation in most companies, the chat was very lively as attendees agreed with her. Turns out this is quite a universal experience! One attendee reiterated what a lot of us in the audience felt throughout Val Swisher's presentation: "I feel she is going point by point through all the problems we are facing right now in the company I work for in relation with documentation!"
Overall, the chat was especially lively during this session and we all really enjoyed Val’s useful presentation and sense of humour!
Elliot Sedegah, Group Product Marketing Manager at Adobe, gave the closing keynote of the day: Customer Experiences – Powered by AI. One of the hot topics Elliot wanted to address was artificial intelligence itself. Only 33% of consumers think they are using AI-enabled technology; in reality, the majority (77%) of consumers interact with AI-powered technologies every day! That number would be even higher if measured today.
2.5 trillion PDFs are circulating worldwide. How can they be transformed with AI? AI can read the content automatically and create a much more elegant experience on mobile (where the majority of users are most comfortable viewing PDFs).
Elliot cites that 59% of business and technology leaders say improving the customer experience is a top motivator for deploying AI (Gartner).
However, many AI/ML projects fail to take off at many different points. But why? Key reasons include:
- Data quality
- Lots of time is spent cleaning data before it is ready for AI
- The right personnel may not be available in-house
Even when an AI is implemented and presents content, can we explain how it arrived at that result?
AI/ML is a critical mandate to designing exceptional customer experiences. We know that AI/ML is important, but we do not know where to start. The answer: Adobe Sensei has us covered. Adobe Sensei is the technology that powers intelligent features in our products.
Customer experiences are getting difficult to manage at scale. So how can we intelligently tap into artificial intelligence (Clever right?) to bend the curve of the cost below the budget?
How do we tap into the power of AI for CX to
- Embrace creative intelligence
- Leverage content intelligence
- And Unlock intelligent experiences?
These are key points that Elliot covers throughout his presentation.
Embrace creative intelligence
- Creatives do not enjoy mundane and repetitive tasks.
- There are also mundane tasks that everyday people don't enjoy, either.
- In order to accelerate digital transformation, organizations need AI.
Leverage content intelligence
- One of the key things we ask is: "Can I make this block of text, image, or video smart?"
- Content can find itself. Content should be exposed to the right person, at the right time.
- Content search and discovery must be fast and specific.
Looking at the AI timeline:
1950s–1980s: Early artificial intelligence stirs excitement.
1980s–2010s: Machine learning begins to flourish.
2010–present: Deep learning breakthroughs drive the AI boom.
The old approach to image recognition was difficult and less accurate. Elliot gave the example of having to tell the system what to look for: two eyes, a nose, a mouth, teeth, and two ears. A second model is shown of a woman whose ears are hidden by her hair. The system: "Yeah, uh… that's not a face. Sorry."
Deep learning is more accurate and can generalize concepts.
The instructions to AI platform:
1. Here are many different faces.
2. Teach yourself to see the face.
This process allows the AI to see the face of the woman with long hair and think: You know what? This is totally a face! This process allows facial recognition for images to get smarter.
But wait, it gets even smarter! Product categorization and logo detection can add enterprise value. This type of technology, and an understanding of content intelligence, can be applied to new and interesting use cases.
Color extraction can help a CPG company ensure that, in its digital assets, the product fills at least 25% of the image.
Unlock intelligent experiences:
We've all been in situations where a lot of assets, content, and people are working to assemble and repurpose content; without that "army" it is not possible to (…). Content must scale with the acceleration of experience making. Looking at the content, we start to think of all the different form factors. Do you love gadgets? (I do.) With all of these new devices coming out, how can we ensure that our content will be accessible through them, too?
Enable experience optimization for mobile with smart cropping: We can’t know all of the specs of the new product. How can we make sure that we aren’t going back to the creative team later on? Smart cropping works for both images and videos by detecting and cropping the focal points to ensure this does not happen.
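The geometric half of smart cropping can be sketched without any AI: given a focal point, compute the largest crop box with a target aspect ratio that keeps the focal point as centered as the image borders allow. In production, a model detects the focal point; here it is simply passed in.

```python
# Sketch: crop an image to a target aspect ratio around a focal point.
# Real smart cropping detects the focal point with a trained model.
def smart_crop(width, height, focal, target_ratio):
    """Return (left, top, right, bottom) of the crop box."""
    if width / height > target_ratio:            # too wide: trim the sides
        crop_w, crop_h = int(height * target_ratio), height
    else:                                        # too tall: trim top/bottom
        crop_w, crop_h = width, int(width / target_ratio)
    fx, fy = focal
    # Center on the focal point, clamped so the box stays inside the image.
    left = min(max(fx - crop_w // 2, 0), width - crop_w)
    top = min(max(fy - crop_h // 2, 0), height - crop_h)
    return left, top, left + crop_w, top + crop_h

# A 1600x900 landscape image cropped to a 9:16 portrait around a
# focal point on the right-hand side of the frame.
print(smart_crop(1600, 900, focal=(1200, 450), target_ratio=9 / 16))
```

The same box computation works frame by frame for video, which is why the feature covers both.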
How can we facilitate content to create more engaging experiences? Elliot uses a retail example:
Visitors to a retail website often click on a pretty flowery dress they like. By utilizing:
- The color composition
- Keywords and entities
- Product categorizations
- Specific Features (such as pattern, design, etc)
The retailer can show visually similar content (or in this case, flowery dresses). This can also be used with documents. Smart tags for text can automatically add topics to a site, enhance content search, and help consumers discover relevant articles.
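Under the hood, "visually similar" typically means ranking items by the similarity of feature vectors. The sketch below uses cosine similarity over made-up three-number vectors (floral, red, formal); in production, the vectors would come from a trained model and cover the attributes listed above.

```python
# Sketch: rank catalog items by cosine similarity of feature vectors.
# The catalog and its vectors are illustrative, not a real data set.
import math

catalog = {
    "flowery dress":    [0.9, 0.7, 0.3],
    "floral blouse":    [0.8, 0.6, 0.2],
    "plain black suit": [0.0, 0.0, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def most_similar(item, k=1):
    query = catalog[item]
    others = [(name, cosine(query, vec))
              for name, vec in catalog.items() if name != item]
    return sorted(others, key=lambda pair: -pair[1])[:k]

print(most_similar("flowery dress"))
```

The same nearest-neighbor idea carries over to documents: smart tags turn text into comparable features, which is what powers the related-article suggestions mentioned above.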
So: how do we use this? How do we leverage this same technology for our XML content and make it part of our content authoring experience? Answer: through AI-powered content reuse for XML documentation (beta).
AI-powered chatbots for self-service support workflows: 91% of organizations are planning to deploy AI within the next three years! The trend of customers delegating their endless digital activities to virtual personal assistants, chatbots, and other self-service tools will only grow over the next ten years!
Throughout Elliot Sedegah’s explanation and examples of AI capabilities, the chat exploded with very impressed professionals! Andreas Kempinski notes: “Very inspiring! Gives us a lot to think about!”
Elliot reiterates the fact that AI is everywhere we look: we must consider how we can actually use this technology to target real customers. We must change the way customers interact with our product, and how we work.
How's that for day one? I feel like we have written a television season's worth of content already! Day one was filled with insightful, challenging information, and we as technical communicators can definitely appreciate the speakers tackling it.
Stefan ended the day by thanking all of the amazing presenters for sharing their great stories, experiences, and wise words, and for starting inspiring discussions. All of this left the audience with many questions and excited for more! He quickly went over the agenda for day two, and it sounds like more interesting presentations and keynotes to match today's are coming our way!
Please join us in getting pumped for tomorrow, when we enjoy season two, day two of Adobe DITAWORLD 2020!
Topics: Technologies, TechComm, TechComm Import