Restoring trust in AI-generated content requires many hands


The rapid increase in digital content creation, sharing and publishing in the age of generative AI raises important questions for businesses and consumers. Can people tell when they encounter AI-generated content and, more importantly, misinformation? How do organisations and content creators ensure they are innovating responsibly when harnessing the benefits of AI?

The implications of both these questions are far-reaching. After all, the rise of AI and the threat of misinformation can have a bearing on political outcomes, brand reputation, and the nature of human creativity.

Adobe’s Chandra Sinnathamby went to the heart of the issue at the recent MAKE IT event. “In the AI era, trust is the number one factor that you've got to drive,” Chandra said.

Quote from Chandra Sinnathamby: “It's really about how can we deliver on the promise of AI, which is around creativity, innovation, and productivity, but at the same time mitigate the risks that it presents typically around bias, unethical approaches and misuse.”

There is also some way to go to restore trust. Adobe’s State of Digital Customer Experience research recently found that 52% of consumers believe they will receive misleading or incorrect information as a result of AI.

Encouragingly, the desire to increase the transparency of AI-generated content and stem the tide of misinformation is growing. This includes the development of open standards that help pinpoint where content has come from and provide tamper-proof provenance.

As outlined by Adobe’s Andy Parsons, this is reflected by the growing membership of the Content Authenticity Initiative (CAI), now at 3,000 members globally. Further, the Coalition for Content Provenance and Authenticity (C2PA) is gaining momentum.

Against this backdrop, a panel of marketers, artists, academics, and technologists came together at the MAKE IT event. Here’s what they had to say about the impact of AI-generated content across domains and the pathways to improved transparency and trust.

The media literacy imperative

As Chandra highlighted, Adobe research showed that 67% of consumers in Australia and New Zealand now expect brands to disclose when they’re sharing AI-generated content.

Tanya Notley, Associate Professor in Communication at Western Sydney University and Co-founder of the Australian Media Literacy Alliance, said that most Australians are not confident in spotting misinformation online. Her research also shows very few adults have been educated on media literacy.

On the other hand, Tanya believes that generative AI can assist people from various backgrounds to become more media literate, saying, “The opportunities are there; they're not exploited and explored.”

Democratising creative skills

Brands are embracing generative AI tools, striving for productivity and streamlining decision-making processes, as Stephen Foxworthy, AI Strategist at Time Under Tension, noted.

However, rather than viewing generative AI through the lens of “getting things done, creating actions,” Stephen says he thinks of the user as a director and the tool as an actor. In this way, more people can tap into creative skills they otherwise wouldn’t have. At the same time, rising adoption brings the associated brand and reputation risks to the forefront.

Tackling transparency and atrophy

Nicholas Davis, Industry Professor, Emerging Technology at UTS, sees great promise in the uptake of content credentials, where creators use tamper-evident metadata, so a viewer knows how content was created or edited.

Nicholas says it “is the biggest movement in the market to allow people to do the right thing. It’s adding that layer of information that makes the market much more efficient in terms of knowing when an artist has been ripped off versus not.”
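The tamper-evident metadata Nicholas describes can be illustrated with a small sketch. This is not the real C2PA format or the Content Credentials implementation; it is a minimal, hypothetical example (invented names and a demo key) showing the underlying idea: metadata is cryptographically bound to the content, so any edit made after signing is detectable.

```python
# Conceptual sketch of tamper-evident provenance metadata.
# NOT the real C2PA format: real content credentials use certificate-based
# signatures and a standardised manifest, not a shared HMAC key.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical key, for illustration only

def attach_credentials(content: bytes, metadata: dict) -> dict:
    """Bind metadata to the content with a keyed digest over both."""
    record = {"content_hash": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_credentials(content: bytes, record: dict) -> bool:
    """Return True only if both content and metadata match the signature."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_hash"] == hashlib.sha256(content).hexdigest())

image = b"original pixels"
creds = attach_credentials(image, {"tool": "ExampleEditor", "ai_generated": True})
assert verify_credentials(image, creds)                  # untouched content verifies
assert not verify_credentials(b"edited pixels", creds)   # post-signing edit is detected
```

Because the signature covers both the content hash and the creation metadata, a viewer can confirm not just that the file is intact but that the claimed history (tool used, AI involvement) has not been altered either.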

However, Nicholas also expressed concern about the prospect of diminishing skills as people use AI in place of learning, particularly given the important role of human oversight in quality and risk control.

Impact on creativity

While Adobe research shows 66% of creatives say generative AI will be good for their careers, artist and Adobe Ambassador Kitiya Palaskis says artists are still working out where they sit in the world of generative AI.

While she emphasises the responsibility of artists and creators when using generative AI tools, she also strongly believes in protecting artistic IP. Kitiya was quick to note that the tools can be an extension of individual artistic expression.

“If I'm the one typing the prompts into an AI generative tool, it's still going to be really, really original to me and my aesthetic, my design style, the way that I work,” Kitiya said.

Generative AI tools have significantly changed how digital content is created and consumed, and adoption is only set to accelerate. That brings the need for media literacy and responsible innovation into sharper focus.

Given the wide and varied impact on industries and society more broadly, it takes a collaborative effort from brands, educators, creators, and government to ensure content trust and transparency are maintained in the age of AI.