Adobe DITA World 2018 – Day 3 Summary by Danielle M. Villegas

Hello everyone! My name is Danielle M. Villegas, and I’m the “resident blogger” at Adobe DITA World 2018.

Welcome to Day 3—the last day of the Adobe DITA World 2018 conference! I must admit, there’s been so much going on that these three days did fly by. That’s usually a good sign that the presentations are interesting and motivating!

Day 3’s general theme seemed to be about looking to the future and putting all the pieces we’ve been learning about over the past couple of days into perspective. Masterful presenters who know how to break things down for easier understanding shared specific strategies: how to get the most out of FrameMaker 2019 and Adobe Experience Manager today to meet your specialized needs, how to use them with other tools like Adobe Sensei, and how to bring these concepts into everyday processes and operations.

Stefan Gentz did announce that, unfortunately, Amit Siddhartha, CEO at Metapercept Technology Services LLP, could not give his scheduled presentation, “Content Strategy using DITAVAL: How the DITAVAL Concept can help planning a good Content Strategy,” because of a personal emergency. Fortunately, Stefan came through with a great backup plan, which you can read about below!

In this post:

  1. [Keynote] Dustin Vaughn: “Hello Future”
  2. Rahel Anne Bailie: “Content Operations: The Missing Link”
  3. Jang Graat: “Why everyone should customize DITA and how to do it easy”
  4. Maura Moran: “Taxonomies, semantic tagging and structured content – better together!”
  5. Val Swisher: “The Holy Trifecta of Content”
  6. Aldous/Aschwanden/Gentz/Sullivan: “Ask the Experts”
  7. Robert Anderson: “DITA 2.0: Strategies for your Content’s Future”

Hello Future!

How Artificial Intelligence and Fluid Experiences Deliver Next-Gen Content

Adobe DITA World 2018, Day 3: Dustin Vaughn

The keynote speaker for Day 3 was no stranger to technical communications, DITA, Adobe FrameMaker, and Adobe Experience Manager. If anything, he was probably one of the most qualified people to talk about how to bring all of those elements together.

Dustin Vaughn is currently an Adobe Experience Manager Specialist Solutions Consultant at Adobe, but he was a Solutions Consulting Manager for Adobe Technical Communication for many years before that. His familiarity with both the Adobe Technical Communication Suite products and Adobe Experience Manager makes him highly qualified not only to understand deeply how these Adobe products work well together but also to explain how users can get the best experiences possible out of them.

Dustin’s talk, “Hello Future! How Artificial Intelligence and Fluid Experiences Deliver Next-Gen Content,” kicked off a discussion of exactly that topic—what makes the best experiences, not only for the users of these tools but also for the customers on the receiving end? Dustin feels that looking into the future at Artificial Intelligence (AI) and fluid experiences will deliver the next generation of content.

He began with a statistic: business buyers do not contact suppliers until 57 % of the purchase process is complete. Buyers expect consistent experiences no matter the device, and businesses are expected to do more with less (or the same), which is not realistic or possible unless we find efficiencies and better ways of doing things.

He continued by saying that in this continually changing world of digital experiences, experiences matter more than ever; expectations from devices will only accelerate with the explosion of data, and we need to make the two work together. Experiences start with great content—a concept that tech comm has understood for a long time. People buy experiences, not products. Experiences are powered by data, and this is where tech comm lags a little: it’s difficult for teams to gather the right data and to get a complete view of customers so we can deliver experiences that delight. That can’t happen without deep technology.

Dustin quoted a 2017 MIT Sloan study, which found that 85 % of executives think AI will help them obtain or sustain a competitive advantage, and that three out of four executives believe AI will enable their companies to move into new businesses. Yet of those surveyed, only 40 % of companies had an AI strategy in place. Another study, by Forrester Research, stated that by 2020, businesses using these technologies will take $1.2 trillion each year from competitors that don’t.

With that in mind, Adobe is working with AI and machine learning to increase efficiency, with the aim of merging the art of content with the science of data. Adobe Sensei is the company’s AI platform with the goal of doing just that. AI can work with AEM (and subsequently DITA files) by helping to automate the management, assembly, personalization, and delivery steps of content management.

Sensei is purpose-built around creative intelligence (geared toward graphic designers, generating renditions, etc.), experience intelligence (delivering to the right person at the right time and understanding customers and what engages them), and content intelligence (understanding document structure and content, faster findability, and boosting productivity). The main users of Sensei features, services, and framework would include practitioners and creators, developers, and data scientists. Sensei can also leverage your own data models. While Sensei is independent of Adobe Experience Manager and is a standalone product, it integrates well with it.

Dustin proceeded to show three demos of how Sensei works within AEM. The first example was a “smart crop,” where photo assets can be automatically cropped to a given size (for social media formats such as Facebook) yet remain flexible enough for a designer to customize as needed. (Less time tinkering in Photoshop, that’s for sure!) The second example featured a text summarization bot that could generate something similar to a DITA short description for a given topic or longer piece of content: it can summarize the text (even within a user-designated word limit) by analyzing which parts are important, and the result can still be tweaked and customized. The third example involved smart tags, an automated process that creates tags appropriately reflecting an uploaded photo asset to make it searchable; the user can add more tags as needed. The AI can also be trained to apply custom tags, such as including your company name whenever your corporate logo appears in the asset. Dustin also offered ideas for other tasks AI could handle, such as a bot that automatically figures out related links, or a bot that crops video appropriately for certain formats, handling scene breaks or aspect-ratio changes without losing the focus of the video.

Fluid experiences are needed to increase reach. “Fluid” is an Adobe term, but the idea is relevant because omnichannel experiences are more important than ever: 83 % of consumers stop engaging altogether when they have a poor experience, and 54 % are less likely to make a purchase if the content is not contextually relevant. Consumers expect contextually relevant experiences, yet creating omnichannel, personalized experiences at scale can be challenging. The difficulty lies in the lack of scalable digital infrastructure, inefficiencies in creating content for multiple channels, the challenge of personalizing content at scale, and a lack of agility in content updates.

To add to those difficulties, technology can have difficulties of its own. (Ironically, at this point Dustin encountered some technical difficulties of his own due to his Internet connectivity, but I was privy to his slides, so I could fill in some of the missing pieces here.) Creating fluid experiences is about building a hybrid foundation of content fragments, experience fragments, XML content, and content services. Combining these elements with the help of a content management system like Adobe Experience Manager makes it much easier to expose content to any channel without developers or extensive custom code.

There’s already an open-data initiative with Microsoft and SAP to extend this concept of open data. Adobe Sensei is about automating tasks, but it is focused on, and purpose-built for, the creative process.

DevOps, DesignOps, ResearchOps – And now ContentOps?

Adobe DITA World 2018, Day 3: Rahel Anne Bailie

The second session of the day was given by Rahel Anne Bailie, Chief Knowledge Officer at Scroll, UK. If you’ve ever been fortunate enough to see or hear Rahel present before, you know that she’ll take some complicated concepts and simplify them in a way that makes it easy for anyone to understand. Today’s presentation was no exception as she talked about what she considered the missing link—content operations.

Rahel started by showing and highlighting the main features of content operations drawn from several definitions: content operations encompass the processes and systems that efficiently and effectively produce and distribute content; they achieve consistent quality at scale; they cover content manipulation and analysis; they optimize processes, structures, scale, and technologies; and they comprise the infrastructure and processes needed to create content across an organization. That said, they are not the same as Content Strategy!

The significance of operations is that it becomes the longest part of the process—a long and lasting phase after strategy and implementation have yielded to operations. Rahel contends that ContentOps differs from the other operational disciplines: DevOps is about automation and monitoring, DesignOps is about enabling design activities with minimal friction, and ResearchOps is about reducing inefficiencies, scaling, and making research repeatable and reliable.

ContentOps, by her definition, is a new term: a set of principles that results in methodologies intended to optimize the production of content, allow organizations to scale their operations while ensuring high quality at delivery time, and enable content to be leveraged as a business asset to meet intended goals. The drivers are similar to those of the other “ops,” but the practices are different. Rahel invited others to join the conversation through the Slack channel she’s set up. (If you decide to join, just tell her that you heard about it here!)

Rahel pointed out that ContentOps can look as different as your content strategy does. Content should be planned out in advance for the best flexibility and should be highly semantic to allow auto-aggregation from diverse sources (she gave a few examples of this from projects she’s done). It’s important to pinpoint business opportunities for personalization, mine for “gold” in the mountains of big media, and operationalize marketing content.

This can’t always be done: typical content production isn’t really a system; rather, managing the publication is the real operation. Copy and paste is not a system! Ultimately, the biggest barriers are humans, not so much the technology.

The recommended adoption of ContentOps would follow these steps:

  1. Content Strategy (requirements, then gap analysis, then a roadmap)
  2. Implement (Assess, then install, then configure. If the configuration isn’t done right, the next step won’t work well.)
  3. Operationalize (Train, use, and iterate)

Rahel quoted technical communicator Kirsty Taylor when explaining the rationale for content operations: professionals want a system that is repeatable and automated, allowing writers to apply their expertise to the highest-value tasks. They also want translation built into the process, scalability, and an infrastructure that is ready when a new product or team is added to the process.

Toolkits are not just the tools, but how you use them. Your tools should enable you to provide semantic content structures, a powerful authoring environment, components and transclusions, workflow management, translation management, digital asset management, and the ability to deliver to downstream systems.

But here are the caveats:

Ultimately, ContentOps means working smarter. Costs go up with content delivery, content creation, and especially content maintenance. ContentOps means climbing the maturity ladder and bringing others on board in stages or steps—no big jumps or leaps.

Rahel Anne Bailie’s final thoughts were questions she asked attendees to consider: What would ContentOps look like in your organization? What would you do with all that freed-up brain power? What would it take to make it happen? Share your ideas on Twitter using the hashtag, #ContentOps.

Why everyone should customize DITA and how to do it easy

Introducing the world’s first DITA Customization Wizard

Adobe DITA World 2018, Day 3: Jang Graat

The topic of the next session was something I had heard about from the presenter a year ago, and now he was able to debut it in its full realization. Jang Graat, CEO at Smart Information Design, felt that a big part of making DITA more accessible to technical writers was giving them a customization wizard in the software used to author DITA documents. Let me summarize the rationale Jang shared.

Jang began by explaining that DITA is known for its ability to adapt to special conditions. Even as you read this, DITA is being applied to more and more business domains, each with its own lingo, which in turn means a growing number of elements being added as element tags, and the list will keep growing. However, more elements can mean less usability: DITA gets harder to use because you don’t know which of the 650+ elements available in FrameMaker 2019 to pick. The result is less reusability. Using all of DITA is simply too much. DITA 1.2 introduced a constraint mechanism, so the idea is to create a subset that is perfect for your business domain: remove unwanted elements through filtering, make optional elements mandatory so they work like a checklist, and always require the same order.

How would that work? Hand-modifying the DTDs that have been used since DITA 1.0 is not a good idea, as one typing mistake can break everything. DITA 1.3 added RelaxNG (RNG), a true XML format: it’s easier to understand, and RNG files look similar to familiar XML/HTML tagging.

Part of the problem lies in how the content model is configured. When we don’t know which element fits our purpose, we fall back on highlighting it in some way, like making the text bold. We should really be thinking about the function behind the highlighting—why italic or bold?—and learn to think in functional terms so content carries true semantic meaning. Stick to the fundamental core elements, and take an element out if you really don’t need it! Removing those elements works by overriding the main definition.

Constraining an element’s content model guides your authors like a checklist. Again, if you remove unnecessary elements (for example, keeping <shortdesc> instead of offering both it AND <abstract> for shorter versus longer documents), a constrained content model is much easier to read and use, and it makes both authors and readers happier.
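
To make the idea concrete, here is a minimal sketch of what such a constraint might look like in RelaxNG. This is my own illustration rather than anything Jang showed: the pattern names and module structure only approximate the real DITA RNG shells.

```xml
<!-- Sketch of a constraint: a customized shell would override the default
     content model for <topic> so that <shortdesc> is required and the
     <abstract> alternative disappears. Pattern names are illustrative and
     assume the base DITA RNG modules are included elsewhere in the shell. -->
<grammar xmlns="http://relaxng.org/ns/structure/1.0">
  <define name="topic.content">
    <ref name="title"/>
    <ref name="shortdesc"/>
    <optional>
      <ref name="prolog"/>
    </optional>
    <ref name="body"/>
  </define>
</grammar>
```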

So, this is where Jang’s idea came to fruition: he showed the Customize DITA Shell “wizard” for FrameMaker 2019, which helps formulate those constraints. For example, a <title> tag has about 40 possible child elements in DITA. With the Customize DITA Shell wizard in FrameMaker 2019, you can easily take out this complexity just by unticking checkboxes and creating your own customized constraints.

Jang demonstrated how it works by accessing it from the FrameMaker 2019 toolbar: Structure > DITA > Customize DITA > Initialize Plug-in. Once the plug-in is initialized, you can pick and choose which elements you want to use—it takes over some of the editing and presents only those elements for authoring. There are some expected limitations; for example, you can’t remove mandatory element tags like <title> or <body>. But this way, you can constrain yourself to only the elements you really need on a regular basis. The reaction in the session chat feed was wild with excitement!

The key to specialization is to copy, rename, and constrain an element: find the base elements you need, define the extensions for those base elements, map the element names to patterns, declare the element patterns, and then define the element content models and element attributes, including your specialized domain. Just as WYSIWYG tools hide the editing, the plug-in hides all the tech under the hood to make things easier. Jang also showed a graphical interface that helps with the same constraint decisions by using blocks to designate different elements (it looks like it could work with Wim’s example from yesterday).

This tool is available now in FrameMaker 2019.

You can check out more about this discussion of how to use DITA in smarter ways at SmartDITA.

Taxonomies, semantic tagging and structured content – better together!

How taxonomy management and semantic tagging helps you make the most of your DITA content

Adobe DITA World 2018, Day 3: Maura Moran

Maura Moran, Senior Content Consultant at Mekon, presented next in a session about how taxonomy management and semantic tagging help you make the most of your DITA content.

Maura started out by explaining that non-structured content is often text that isn’t searchable and is difficult to repurpose and reuse; any structure within the document is indicated typographically, with heading formatting and the like. Structured content, on the other hand, is made of meaningful, granular chunks, which gives content more flexibility and meaning.

Semantic content, in turn, is structured content in which the meaning is explicit. The benefits of using semantic content are that it’s complete and consistent, easier to author and read, and reusable at a granular level, so that granular content can be used in things like search-engine snippets. Semantic content’s consistent, predictable answer structure helps reader comprehension and lends itself easily to multichannel output.

Adding metadata helps as well. Information about the content, some of it in the structure and some in the content itself, can be added to help find and manage the content.

When adding semantic content and metadata together, the benefits increase. You get better searching and SEO, and granular content that’s easy to find and recombine. This leads to more reuse.
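
As a small illustration of that combination (my own example, not one from the session), here is a DITA concept topic whose semantic structure carries the content while the <prolog> carries descriptive metadata; the keyword and category values are invented:

```xml
<!-- Illustrative DITA topic: semantic elements plus metadata in the prolog.
     Keyword and category values are invented for demonstration. -->
<concept id="saving-changes">
  <title>Saving your changes</title>
  <shortdesc>Click OK to save your changes.</shortdesc>
  <prolog>
    <metadata>
      <keywords>
        <keyword>save</keyword>
        <keyword>OK button</keyword>
      </keywords>
      <category>User interface</category>
    </metadata>
  </prolog>
  <conbody>
    <p>When you finish editing, click <uicontrol>OK</uicontrol> to save
       your changes.</p>
  </conbody>
</concept>
```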

This is where taxonomy helps. A taxonomy is a controlled vocabulary—a rigid list of terms used in metadata for consistency. Taxonomy adds relationships to vocabularies by connecting terms to each other. This yields greater context and meaning, provides better searching, drives navigation such as pick lists, and improves personalization and customized outputs. You can use taxonomy to extend your tagging in DITA by identifying your concepts in a meaningful way and connecting them to real-world items. You can also easily connect to the same concept elsewhere in your organization, in your partners’ systems, or in open data sources. Taxonomy supplements what you know about your concepts with additional information.
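
Within DITA itself, one way this shows up is a subject scheme map, which defines a controlled vocabulary and binds it to an attribute so authors can only tag with approved values. The sketch below is my own, with invented product keys:

```xml
<!-- Sketch of a DITA subject scheme map: a small controlled vocabulary
     bound to the @product attribute. Keys are invented for illustration. -->
<subjectScheme>
  <!-- The taxonomy: a hierarchy of subjects identified by keys -->
  <subjectdef keys="products">
    <subjectdef keys="desktop-app"/>
    <subjectdef keys="mobile-app"/>
    <subjectdef keys="cloud-service"/>
  </subjectdef>
  <!-- The binding: only these keys are valid values for @product -->
  <enumerationdef>
    <attributedef name="product"/>
    <subjectdef keyref="products"/>
  </enumerationdef>
</subjectScheme>
```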

Maura used a recipe as an example of content where there were some issues if steps or ingredients were in the wrong order. The taxonomy is embedded in the authoring tool to ensure vocabulary and synonyms are synchronized.

Maura’s tips for managing metadata and taxonomy:

There are many taxonomy management tools to choose from; use the one that meets your needs best. Check whether a system you already have manages tags, as it probably does some taxonomy management. Specialist taxonomy management tools offer the most sophisticated term linking and management, workflow, reporting, and so on. Use a specialist tool if you want to manage your taxonomy in one place and share it across systems.

Mekon developed a tool called Semantic Booster that extends AEM’s tagging capabilities by connecting the PoolParty app (a taxonomy management tool) to AEM, combining the power of enterprise taxonomy management with AEM tagging.

During the Q&A portion of the presentation, Maura suggested Protégé (as long as you work through the pizza tutorial first) and Full Force Ontology as free or low-cost taxonomy management tools. The Finalyser product by SQUIBBS.de also has a plug-in for FrameMaker 2019 that can work.

Maura Moran wrapped up her talk encouraging attendees to consider becoming taxonomy experts, as they are needed more these days. She also recommended two books to read to learn more: The Accidental Taxonomist by Heather Hedden and Building Enterprise Taxonomies by Darin Stewart.

The Holy Trifecta of Content

Combining structure, terminology management, and translation to achieve success

Adobe DITA World 2018, Day 3: Val Swisher

The next session was a year in the making—but not for the reason you’d expect. It was supposed to be presented at DITA World 2017, but at the time that she was slotted, Val Swisher, CEO of Content Rules, Inc., was contending with wildfires around her home in California! Wisely, she dealt with her home at the time (fortunately, the fires spared her home), and was naturally invited back this year to present. I had seen this presentation done before in person a few years ago, and found that this was still as fresh now as it was then.

Val’s presentation discussed how combining structure, terminology management, and translation can help you achieve content success—a “Holy Trifecta of Global Content Success”!

Val explained that this happens when you intersect three technology areas, namely structured authoring, source terminology, and translation memory. Each is great on its own, but together the whole is equal to more than the sum of its parts.

In the beginning, writers created content, but it was unstructured—more like monolithic files that were cumbersome. Even worse, unstructured content can be expensive to translate: every piece of content written gets reviewed, then translated; the translation then gets reviewed; and THEN it’s published to the world. That’s not an efficient process.

Structured authoring means writing in small chunks, or topics, that can be pulled together and reused, which in turn produces more deliverables by writing once but using many times. It creates consistency among deliverables and supports multichannel publishing. By storing chunked content in a CCMS (a database containing all of these small chunks of content, organized using a taxonomy), you can access the chunks multiple times in multiple combinations to create new outputs.

In DITA, these chunks are known as topics. They are tagged with metadata for searchability, for internal and external use, stored in the CCMS, and then pushed to a publishing engine. The content can be married to its format seamlessly using style sheets, yielding dynamic content outputs in a variety of different formats.

Writing becomes streamlined, which means the translation is also streamlined because you only have to review small bits rather than big parts.

However, as great as that is, there are still other problems to solve, especially with terminology. To put it in perspective, Val showed an image of a button labeled “OK” and asked: what do we do with an “OK” button? Do we click? Press? Tap? Select? Choose? It could be any of those, which illustrates why you need to manage your source terminology. She used another example, asking how many terms are used for a “dog”: Hound? Puppy? Canine? Pooch? The key to source terminology management is to pick one and stick to it. You can’t mix and match terminology; it gets confusing, especially once you translate it. She showed the view from a terminology management system (Acrolinx) that can flag incorrect terms. (Another terminology management tool suggested in the attendee chat was Congree.)

In reality, nobody looks at the style guide, because nobody has the time! The only way to enforce terminology management is with technology. People want to be able to do it on their own, but they don’t have the time, so the trick is to constrain the words authors can use, focusing enforcement on the terms that matter most. The same term must always be used.

The third part of this “trifecta” is translation memory. When content is sent to be translated, the translator manipulates the content in an interface that shows the source language and target language side by side: the translation source on one side and the translation target on the other. Together they form a translation unit called a segment (e.g., a sentence), which then goes into a database called a translation memory (TM). Building up TM means you don’t have to translate that segment again, nor do you have to PAY to have it translated again, unless you change the source, because that would change the match. TM saves money and time and improves consistency. As technical communicators, we still need to look at the source language from the TM to ensure 100 % matches. There’s also a need to push already-translated source segments from the TM back to content creators; this is where the terminology management and TM databases sync up to provide consistency between terminology and translations.
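
For readers who haven’t seen a TM up close, translation memories are commonly exchanged in the TMX format. The snippet below is a minimal sketch of my own (the English/German pair is invented) showing how a source segment and its translation are stored together as one translation unit:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal TMX sketch: one translation unit pairing an English source
     segment with its German translation. Content is invented. -->
<tmx version="1.4">
  <header creationtool="example" creationtoolversion="1.0"
          segtype="sentence" o-tmf="example" adminlang="en"
          srclang="en" datatype="plaintext"/>
  <body>
    <tu>
      <tuv xml:lang="en">
        <seg>Click OK to save your changes.</seg>
      </tuv>
      <tuv xml:lang="de">
        <seg>Klicken Sie auf OK, um Ihre Änderungen zu speichern.</seg>
      </tuv>
    </tu>
  </body>
</tmx>
```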

When structured authoring (write once, use many; translate once, use many), terminology management (say the same thing the same way, every time you say it) and Translation Memory come together, they form a “holy trifecta” for global content success.

Ask the Experts

An expert panel

Adobe DITA World 2018, Day 3: Tom Aldous, Bernard Aschwanden, Stefan Gentz, Matt Sullivan

As mentioned at the introduction to this blog post, one of the presenters had to drop out. In its place, an “Ask the Experts” panel of Tom Aldous, Bernard Aschwanden, Matt Sullivan and Stefan Gentz led an open Question & Answer session. The discussion touched on many topics very quickly, so hopefully, I was able to capture the highlights here.

Discussion started by continuing the conversation from Val’s presentation about how all the pieces (localization, marketing, structured authoring) are starting to roll together. DITA now works well with the localization process, and the money it saves can pay for projects in the long run.

Buying decisions are usually made by the person with check-writing authority, which often means the “C” (executive) suite. A director usually has to make a pitch at the VP level, but the VP can only authorize, since the directors hold the budget. Comparing DITA in the “olden days” to now, the functionality is greater and there’s less effort needed to achieve it.

When asked about a low-cost CCMS, the first suggestion was to get a SharePoint Online login and configure the plug-in. DITAToo was also suggested; it does quite a bit for a smaller CCMS and starts as low as 720 EUR per user annually, making it a great entry-level price point for testing out a CCMS so that you can get started.

Ultimately, the decision on a CCMS comes down to cost, scalability, and how it will be used in your company. Scope out possibilities with any of the vendors; you never know, depending on your company’s needs, what kind of deals vendors might be able to arrange. Make sure you look at systems that support DITA, and note that vendors might scale the pricing based on which components you are actually using; pricing can vary depending on company or enterprise needs. Concentrate on making DITA support the most important requirement, as it will make it easier to convert content later: because it’s all in a standard (XML), you won’t have to redo your content, and migration becomes seamless.

Another attendee asked, “What’s been the best ROI with one of your clients?” Matt Sullivan said he had a client who found that moving to a template process and transitioning content over reduced costs by 80 %! Bernard Aschwanden helped with a translation project by talking to customers about which pages they actually used and then reworking the content to remove what was irrelevant. Within a year, the client had saved €1 million as a result, against the cost of him spending a week on site. Moving to structured content made it better: instead of being a cost center, content became a revenue driver, because they could make custom books as needed. Matt suggested that the best thing you can do for a client is look toward automating processes to eliminate waste. Tom Aldous added that time to market and the number of people working on a project can make a difference. He described a government agency he worked with that produced health-related regulations and needed to generate content quickly: before DITA, the MS Word workflow would take 3–6 months, but moving to DITA reduced the effort to half a person’s work, and they can now publish within minutes—even with approvals—in less than a day. The extra people became content experts who could help mine data better. Bernard added that instead of eliminating jobs, this means better service: more deliverables, delivered more quickly.

Another question was, “How can we expand DITA usage?” The consensus among the panel was that promoting the features, like those in AEM, that non-technical communicators can use helps show that DITA is just one part of a bigger solution. Bernard and a colleague, Jacquie Samuels, wrote a white paper called The Convergence of Technical and Marketing Communication, which Stefan Gentz highly recommended.

A specific question directed to Tom Aldous was, “Should more US government agencies publish in DITA?” Tom’s answer was that the US General Services Administration (GSA) was one of the early groups to try it out successfully, so adoption should spread over time. Using XML and DITA does not necessarily mean a full implementation; a lot of the standard can be used across the board, even alongside agencies’ “unique” standards.

Bernard Aschwanden talked about how the conversation around simplified DITA has now turned to Lightweight DITA, and how we need to figure out the focal areas that truly matter and find what works best. Consistency is needed for extraction and migration. Lightweight DITA may be the entry-level standard that gets more companies to adopt DITA and put it in front of their authors, but those authors need support.

The last part of the conversation noted that many more millennials are open and receptive to technology, which helps get more young people involved. Newer technology lets us think more about real content rather than about formatting issues and what it takes to format.

DITA 2.0: Strategies for your Content’s Future

Your content needs are changing – is DITA changing to keep up?

Adobe DITA World 2018, Day 3: Robert Anderson

As the last session after a very busy yet fruitful three days, Robert D. Anderson, who is the DITA Open Toolkit Architect at IBM, shared with the attendees what is going on with the future of DITA 2.0.

The future is now, or at least very, very soon in some cases! DITA 2.0 is the next standard to be released, following the current DITA 1.3 standard. Why not a DITA 1.4? The goals for a 2.0 version were to overhaul things enough to provide simplification: reduce complexity, remove unused features, redesign hard-to-use features, and, where sensible, streamline things so that there’s only one way to do a given action or function.

Originally, DITA included certain features that ended up being dead baggage or technical debt, so migration is now required to help clean things up. The OASIS team is trying to make that as easy as possible: every improvement proposal submitted must explicitly address migration details and plans and pass a cost-benefit test for any incompatibility. As a result, expect a migration document from OASIS together with the DITA 2.0 specification. It’s already been determined that DITA 2.0 will allow backward-incompatible changes. You don’t need to prepare for it now, as it won’t be here for another couple of years; it’s not quite ready yet.

Even so, destruction has already begun! Anything marked as “deprecated” or “should be removed” (because nobody uses it or it doesn’t function properly) will be eliminated. The team is also working on adding multimedia tags designed for compatibility with HTML5, including elements for audio and video, which are already part of Lightweight DITA.

The chunking attribute was part of the initial DITA design, and it looked beautiful, but ultimately it became ugly and got in the way. The specification defined tokens that people thought they understood, but in practice they didn’t really know what those tokens meant.

So, the new approach to chunking will use the tokens “combine” or “split,” depending on the desired action; other tokens will simply be ignored. The team is also getting rid of some elements and improving others. For example, @outputclass becomes a universal attribute, and they want to make glossary entries more useful, such as by permitting superscript and subscript and allowing general-use phrase-like elements.
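
Based on what was described in the session (and not on a published 2.0 specification, which didn’t exist yet), the new chunking tokens would be used on a map roughly like this; the file names are invented:

```xml
<!-- Sketch of the proposed DITA 2.0 chunking: chunk="combine" merges the
     referenced topics into a single output document. File names invented. -->
<map>
  <title>Installation guide</title>
  <topicref href="installing.dita" chunk="combine">
    <topicref href="installing-windows.dita"/>
    <topicref href="installing-linux.dita"/>
  </topicref>
</map>
```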

Steps will also be changing. A common complaint has been, “Why can’t I put steps inside steps?” With DITA 2.0, you will be able to replace <substeps> with nested <steps> and nest steps as deeply as you want.
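
Here is a small sketch of what nested steps might look like, again based on the session rather than a finished specification; the task content is invented:

```xml
<!-- Sketch of nested steps as described for DITA 2.0: the inner <steps>
     takes the place of today's <substeps>. Task content is invented. -->
<task id="connect-repository">
  <title>Connect to the repository</title>
  <taskbody>
    <steps>
      <step>
        <cmd>Open the connection manager.</cmd>
      </step>
      <step>
        <cmd>Add a new connection.</cmd>
        <steps>
          <step><cmd>Enter the repository URL.</cmd></step>
          <step><cmd>Enter your credentials and save.</cmd></step>
        </steps>
      </step>
    </steps>
  </taskbody>
</task>
```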

While the items mentioned so far are definitely going to be in DITA 2.0, that doesn’t mean everything is a done deal. It’s still a work in progress, and a few more items that are likely to make it in are still in development.

The changes in progress include updates to bookmap: adding key definitions, making it more flexible, and adding an element for a change list. There are also goals to loosen up attribute specialization and to add a new element for glossary terms that permits highlighting. The OASIS team also wants to resolve old inconsistencies and remove more unused tags.

Issues that still need resolution include deciding whether publication maps should be a more flexible replacement for bookmaps, whether to use <em> or <strong>, whether to remove or redesign @copy-to, whether to allow images or objects to vary, whether to allow <titlealts> in maps, what metadata improvements to make, and how to redo subject schemes.

The must-do items that need to happen regardless involve cleaning up the specification: removing duplication, creating a better and simpler organization, providing clearer and simpler conformance clauses, giving clear and easy-to-compile normative rules, and providing good examples in the element reference. Many of these are in progress or done.

What would YOU like to see? You can be part of the process by signing up on the DITA Technical Committee comment list on the OASIS website to comment and provide suggestions!

As for related work products, technical content will be a separate delivery as DITA Technical Content 2.0. It will update the troubleshooting design, which was not fixable in 1.3, and update bookmap resources, while the fate of machinery task remains unclear at this point.

Learning and Training will also be a separate delivery, as that subcommittee is working on a Learning and Training version compatible with DITA 2.0.

The highly anticipated Lightweight DITA—an initial XML variant based on DITA 1.3—is still in the works: the committee note is complete, and the specification is in progress. Lightweight DITA is a great entry-level approach to the standard for everyone, as it benefits those who don’t need all the features, yet you can build on it as you become more accustomed to it and choose to become a fuller user. Markdown on its own is geared toward engineers, while Lightweight DITA is aimed more at technical authors, and this is why it is becoming a standard.

Robert finished up by inviting attendees to read about DITA’s ugliest feature. He also welcomed any ideas that you’d like to suggest to him by contacting him on Twitter at @robander.

One additional note from this session from attendee David Nind in the session chat: the Lightweight DITA introduction note provides a great introduction to XDITA, HDITA, and MDITA.

Summary

And that’s a wrap! These three days of DITA World flew by! It’s interesting to see the themes that emerged this year. Last year, it seemed to be all about getting ready for chatbots and AI. While those were still topics this year, there was a greater emphasis on getting back to the basics of DITA and on how DITA, in whatever shape or form, can transform content into something magical when executed properly. Several of the sessions were about making the process even easier than before with FrameMaker 2019 and the XML Documentation Add-on for Adobe Experience Manager—and beyond! The conference also benefitted from case studies shared by those who have jumped into DITA transformations, which inspired us on how we can move forward with this magnificent standard.

If you are already looking forward to next year’s DITA World 2019, you can sign up now! While the agenda hasn’t been set yet, it would be no surprise if it turns out to be just as good as this year’s and those of years past.

Thank you to all the attendees who signed up, and to all of those who attended live during the course of the three days! Recordings will be coming soon, so watch this space for more information!

Thanks to all the presenters, and especially to Stefan Gentz and Matt Sullivan for steering the event so smoothly, even when there were technical hiccups along the way. You made it look easy even when you admitted that it wasn’t!

I look forward to seeing you all—virtually—at DITA World 2019!