Rebalancing trust in favour of AI’s potential
The world of generative AI continues to move at breakneck speed. Experimental projects are increasingly emerging from the testing lab into production, and global consumer audiences are interacting with AI-generated content en masse.
However, this has also brought trust in generative AI and its outputs into the public consciousness. The potential for unethical approaches, harmful bias and misinformation is sparking real-world and speculative debates about what the future holds for businesses, content consumers, policymakers, and humanity itself.
I was pleased to host a panel of technology and governance experts at Adobe’s Best of Summit Sydney event to discuss responsible innovation in the AI era. They agreed that despite the shifting regulatory environment, there is an expectation for all — brands, educators, creators and government — to be transparent and united against the tide of misinformation.
So the question remains: how can we deliver on the promise of generative AI (productivity, creativity, innovation) while mitigating the risks? This is precisely what the Australian Government is trying to address as it contemplates possible new AI policies. We must rebalance consumer trust in favour of AI’s valuable potential, or Australia risks being left behind the rest of the world.
In essence, everyone should be getting ready to take collective action.
Fixing issues at the source
It’s fair to say consumers are on high alert about AI-generated content, even those who already find AI useful in their daily lives. Adobe research confirms that people are worried about the trustworthiness of the content they’re seeing, and that it’s becoming difficult to discern real from fake.
While this is prompting consumer interest in tools that verify the authenticity of digital content, day-to-day users of generative AI are also seeking clearer guidance, particularly in the workplace. As Ed Husic, Minister for Industry and Science, has said, if the guardrails aren’t there, we’re not going to see the benefits.
Boosting AI governance and transparency
On the panel, Professor Aaron Quigley, Deputy Director and Science Director of CSIRO’s Data61, described current workplace usage of generative AI as the “wild west”, where many people are experimenting with few guidelines to follow. At the same time, organisations are moving headlong into deploying generative AI in live settings.
As companies develop their internal and external policies, Aaron offers three considerations. First, have conversations about what generative AI guidelines mean and how they will be embedded into the organisation. Second, be open about generative AI usage. Finally, consider how easy it is for customers to see that the organisation is using generative AI.
“You’re building a trusted relationship with your customer, and if you breach that trust because they don’t know that you’re using AI, it’s going to be hard to win that back.”
-Professor Aaron Quigley, Deputy Director and Science Director, CSIRO’s Data61
Organisations embrace the unknown
Despite varied progress, organisations globally are opening up conversations about AI in the workplace, and many are adopting internal principles and governance frameworks that enshrine a clear set of ethical principles, from accountability to reliability and safety. Aaron says this can help guide organisations on their adoption journeys and avoid a ‘shock to the system’ when regulation is introduced.
When discussing the topic of readiness in the era of AI on the panel, Adobe’s Global Head of Industry Strategy, Julie Hoffman, shared a surprising insight from her global research into organisational agility. Julie noted that as organisations evolve their operating models to prepare for AI, “brands are not afraid to acknowledge the gaps in their knowledge about AI” and are therefore driving investment in specialist personnel and skills.
Increasingly, brands are transitioning their operating models from decentralised to centralised, and then towards a more federated format in which knowledge around AI is institutionalised. “During this interim process,” she explained, “they add 50 to 300 personnel resources between internal talent and mostly external consultants to assist through that transition.”
Instilling trust as elections loom
The issue of trust in AI-generated content is being put to a significant test. Globally, more voters than ever are heading to the polls, and without widespread tools to verify whether the online content they consume is real, consumer trust could erode further at a time when citizen trust in government is already low.
Encouragingly, more organisations are providing users with the tools to check for themselves. This includes the Content Authenticity Initiative, a global coalition of more than 3,000 members working together to add transparency to digital content via tamper-evident metadata called Content Credentials.
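For readers wondering what “tamper-evident” means in practice, the following is a minimal conceptual sketch in Python, not the actual Content Credentials (C2PA) format: a cryptographic digest of the content is signed and attached as metadata, so any later edit to the content no longer matches the signed digest. The key and function names here are hypothetical, for illustration only.

    # Conceptual sketch of tamper-evident metadata (NOT the real C2PA format).
    # A digest of the content is signed and attached as metadata; editing the
    # content afterwards makes verification fail, so tampering is evident.
    import hashlib
    import hmac

    SIGNING_KEY = b"issuer-secret-key"  # hypothetical; real systems use certificate-based signatures

    def attach_credentials(content: bytes) -> dict:
        # Compute a SHA-256 digest of the content and sign it with HMAC.
        digest = hashlib.sha256(content).hexdigest()
        signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
        return {"content": content, "digest": digest, "signature": signature}

    def verify_credentials(asset: dict) -> bool:
        # Re-derive the digest from the content and check the signed value.
        digest = hashlib.sha256(asset["content"]).hexdigest()
        expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
        return digest == asset["digest"] and hmac.compare_digest(expected, asset["signature"])

    asset = attach_credentials(b"original image bytes")
    print(verify_credentials(asset))   # True: content matches its credentials
    asset["content"] = b"edited image bytes"
    print(verify_credentials(asset))   # False: the edit is detectable

The real specification is far richer than this sketch: it records edit history and provenance, signs with certificates rather than a shared key, and embeds the manifest in the file itself, where open-source tools from the Content Authenticity Initiative can inspect it.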
Alongside these voluntary initiatives, Ryan Black, former Head of Public Policy at the Technology Council of Australia, emphasised that organisations should remember that many of the concerns surrounding AI are already covered by existing regulation.
Adopting technologies that provide confidence in content integrity will be crucial to the election process and to how consumers navigate content in general. Content Credentials may not be a “silver bullet”, but they are foundational to letting consumers see the truth, and they can be a cornerstone of media literacy and trust-building among consumers.
As businesses seek to establish good governance and advance their AI operating models, it’s crucial that they also embrace the responsibility of educating consumers and providing them with tools to understand what is true. As Aaron said, “If we don’t get this part right, I think it could potentially undermine trust in AI more broadly and remove the social licence.”