Exclusives

Adobe: AI, Machine Learning Can Enrich Your Content

Metadata is a crucial component of media and entertainment companies’ content strategies but it is a component that all too often relies on manual intervention from content creators, according to Chad Dybdahl, senior solutions consultant at Adobe.

Adobe’s Sensei platform, however, enables organizations to leverage artificial intelligence (AI) and machine learning (ML) to automatically tag and categorize DITA XML-based content, enriching the end-user experience and ultimately ensuring that customers can find the content they need, he said April 13 during the Adobe webinar “Don’t search, find: Enrich your content with machine learning and artificial intelligence.”

“Metadata is, if you’re unaware, information about the information,” he explained. “It can become very granular when we start talking about structured content management and things like topic-based authoring, where we want to surface a particular piece of content that might be relevant for a certain screen in an application, for example, that you’re documenting or a certain function of a product” that you are providing documentation for, he said.

Metadata is “a way of telling the customer ultimately – your audience – more about the information that they’re about to consume,” he noted.

“As a consumer, which we all are, this is something you use all the time,” he pointed out. “Metadata touches many facets of our online existence, particularly when we start thinking about entertainment” and online shopping, where recommendations are made for you based on prior purchases, he said, noting “that’s all powered by metadata.”

When using streaming music services, whenever you search by song, artist, album, genre or release year, “those are all examples of metadata,” he said. It is all information about the information, and you use it to find the content you want to consume. Music services also make recommendations based on what you have listened to before, he said, noting it is all driven by metadata “right behind the scenes.” Movie streaming services, including Netflix, are metadata-driven as well, he noted.
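The finding-by-metadata idea can be made concrete with a small sketch. The catalog, field names and tracks below are purely illustrative (not any streaming service’s actual data model): each track carries metadata fields, and a filter returns every track matching whatever combination of fields the listener asks for.

```python
# Illustrative only: a tiny song catalog where each track carries metadata
# fields, plus a filter that finds tracks by any combination of those fields.

CATALOG = [
    {"title": "So What", "artist": "Miles Davis", "album": "Kind of Blue",
     "genre": "jazz", "year": 1959},
    {"title": "Blue in Green", "artist": "Miles Davis", "album": "Kind of Blue",
     "genre": "jazz", "year": 1959},
    {"title": "Paranoid Android", "artist": "Radiohead", "album": "OK Computer",
     "genre": "rock", "year": 1997},
]

def find_tracks(catalog, **criteria):
    """Return every track whose metadata matches all given field=value pairs."""
    return [track for track in catalog
            if all(track.get(field) == value for field, value in criteria.items())]

# Finding, not searching: ask for jazz from 1959 and get exactly those tracks.
hits = find_tracks(CATALOG, genre="jazz", year=1959)
```

The point of the sketch is that the listener never scans the whole catalog; consistent metadata fields do the narrowing.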

When using these services, “I don’t want to search – that’s not fun” when trying to decide on a movie to watch or music to listen to or products to buy, he said, explaining searching is not the objective: “My objective is to find things.” You “search” for lost pets or missing socks or things in junk drawers, where there is no organizational strategy, he said.

An organization’s website should not be like a junk drawer, he said, noting visitors should be able to find PDF and other content there.

If they haven’t already done so, media and entertainment organizations should be implementing a metadata strategy to drive the findability of their content and make it more accessible to their audience, he said.

But one “really common problem,” he warned, is the inconsistent application of metadata. Consistency means that every piece of content has been tagged and enriched with metadata, and that it has been done in a way that is consistent from one piece of content to another, he explained.
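The two failure modes he describes — untagged content and inconsistently tagged content — are the kind of thing an organization can audit mechanically. The sketch below is a minimal illustration with hypothetical field names and vocabulary (nothing here reflects Adobe’s actual schema): it flags topics that are missing required metadata or that use values outside a controlled vocabulary.

```python
# Illustrative metadata-consistency audit; field names and the controlled
# vocabulary are hypothetical, not any real product's schema.

REQUIRED_FIELDS = {"product", "audience", "topic_type"}
CONTROLLED_VOCAB = {"topic_type": {"concept", "task", "reference"}}

def audit(topics):
    """Return (topic_id, problem) pairs for missing or non-standard metadata."""
    problems = []
    for topic_id, meta in topics.items():
        for field in sorted(REQUIRED_FIELDS - meta.keys()):
            problems.append((topic_id, f"missing field: {field}"))
        for field, allowed in CONTROLLED_VOCAB.items():
            if field in meta and meta[field] not in allowed:
                problems.append((topic_id, f"non-standard value: {field}={meta[field]!r}"))
    return problems

topics = {
    "t1": {"product": "WidgetPro", "audience": "admin", "topic_type": "task"},
    "t2": {"product": "WidgetPro", "topic_type": "Task"},  # missing field; wrong case
}
issues = audit(topics)
```

A check like this catches the inconsistency before it reaches the audience, where it would otherwise surface as content that simply cannot be found.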

How Sensei Can Help

Adobe Sensei “really drives a lot of digital experiences already,” he went on to say, noting it initially started with the classification and tagging of images.

Leveraging AI and ML to enrich technical content “without human intervention” can significantly improve search results, he said, before providing demonstrations for viewers.
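To give a feel for what automatic enrichment means in practice, here is a deliberately naive stand-in — Sensei’s actual models are not shown in the webinar recap, and this keyword-scoring approach is far simpler than real ML classification. The tag names and keyword lists are invented for illustration: each candidate tag is scored by how many of its indicator words appear in the text, and tags above a threshold are attached without human intervention.

```python
import re

# A naive stand-in for ML-based auto-tagging (not Sensei's actual method):
# score each candidate tag by how many of its indicator keywords appear in
# the text, and attach every tag that clears the threshold.

TAG_KEYWORDS = {
    "installation": {"install", "setup", "download", "configure"},
    "troubleshooting": {"error", "fails", "crash", "fix"},
    "billing": {"invoice", "payment", "subscription", "refund"},
}

def auto_tag(text, min_hits=2):
    """Return the sorted tags whose keywords appear at least min_hits times."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(tag for tag, keywords in TAG_KEYWORDS.items()
                  if len(words & keywords) >= min_hits)

doc = "If the install fails with an error during setup, download the installer again"
tags = auto_tag(doc)
```

A real system would replace the keyword sets with a trained classifier, but the workflow is the same: content goes in, consistent metadata comes out, and nobody has to tag by hand.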

During the Q&A, he noted that natural language processing is “a core feature of Sensei.”

The ability to use technical content to “feed a chatbot” is “certainly something that could be achieved with Sensei,” he noted. However, it’s not yet part of XML Documentation for Adobe Experience Manager, which provides structured content management for experience-driven documentation.

It’s also “a little early, I think, for us on the XML product side for machine learning,” he told viewers. However, “we do have one or two customers doing a beta of this now and it is going to be rolled into the product, I believe, on our roadmap coming up here in the next year or so – but” Adobe is not yet at “the point of case studies with those customers just yet although, as you might imagine, we are keen to do that,” he said, adding: “Stay tuned.”

What will be the “next level” is “when we start thinking about helping content creators to author content [and] suggesting content that might exist in the system already,” he said.

He concluded: “It’s really, I think, going to become a very powerful way to really streamline authoring workflows [and] really facilitate the [de]duplication of content and also just help you get the content out the door faster in a much more consistent way by allowing machine learning and artificial intelligence to help you find content that you might not yet be aware of.”