Reflections on LEAD: AI, Intent, Integrity & Trust

By Matt Bourn, Communications Director, Advertising Association

I shared my thoughts on stage at LEAD about the reputational opportunities and challenges that AI is bringing to advertising. We track the national conversation about advertising, and in the last 12 months the volume of coverage linking advertising with AI has doubled. It’s now the third most covered topic in our sector: the majority of that coverage is about investment in AI and business models, but there are also stories of deepfakes, concerns over job losses and people’s sometimes negative reactions to brands experimenting with AI in their ads.

Towards the end of our panel, I asked the room a question via Slido: in the age of AI, will people trust advertising more or less? The response was telling: 85% of the audience said less. Reflecting on that result, I guess it’s understandable. We’re living through a time when the rules are still being developed and tested as we learn more and more about the technology.

I was fortunate to be discussing this with experts in the field – Alex Dalman from VCCP, Sean Betts from Omnicom Media Group, WPP’s Dr. Daniel Hulme, and Lloyds Banking Group’s Suresh Balaji.

They are all testing and learning what this technology will be capable of. For example, Suresh shared the results of a “Marketing Turing Test” that he conducted with WPP. They pitted a human-only team against an “AI + Human” team on a live brief. Suresh found the hybrid team didn’t just work faster; they produced better, more creative ideas. Daniel backed this up by explaining how WPP has trained a “creativity agent” on the work of past creatives. When tested on briefs, this agent came up with ideas that matched previous Cannes Lion-winning concepts. That didn’t mean, though, that it could come up with the next one. Alex noted that it can help visualise ideas instantly, getting stakeholders on board with bold ideas that might otherwise die in a slide deck.

Their views were clear: AI is here to help us do more. Faster. And better.

It’s important to remember that the technology itself is neutral. As Daniel pointed out, “AIs don’t have intent; human beings have intent”. For me, this goes right to the heart of the trust issue. He shared a thought experiment about a ride-hailing app. If an AI is optimised purely for profit, it might learn to charge you more when your battery is low because it correlates low battery with an increased need to secure that ride home, particularly if you’re on your own and it’s late at night. The AI doesn’t know it’s exploiting a vulnerability; it just sees a pattern.

This is where our responsibility lies. We can’t (currently) set these tools to ‘optimise’ for the brand we are working on, so we must interrogate the intent behind every deployment. Are we using AI to be helpful, like O2’s “Daisy” (the AI granny wasting scammers’ time), or are we using it to exploit weaknesses? Much of this points back to a bigger question – what do you want your brand to be trusted for?

Sean talked more about the period of transition we are living through. Right now, AI platforms are becoming the “front door to the internet,” he said, influencing how people discover brands and make decisions. Yet in his view these systems are black boxes: we currently lack transparency on how Large Language Models (LLMs) associate brands with concepts.

There was strong agreement on the panel regarding labelling. Alex and I shared a general hesitation about “humans that aren’t real humans” in ads; it feels “icky”. But Suresh challenged us to look at the bigger picture of the customer experience. If the service is seamless, will customers care whether an image was AI-generated? Perhaps not. But they will care if they feel deceived. Transparency isn’t just a regulatory requirement; it’s fundamental if advertising generated by AI is to be trusted.

Ultimately, in a world where digital interactions increasingly take place machine-to-machine on our behalf, I believe the premium on human trust will skyrocket. Suresh noted that while Lloyds is the biggest digital banking group, his primary goal with the latest brand platform is to build people’s trust and confidence in the brand. In a complex category like finance, people might use digital tools for convenience, but they rely on brand reputation for security.

The audience vote showed that we still have much work to do to convince the wider industry – and the public – that AI can help to build trust in advertising, and in brands more generally. We can’t just move fast and break things when trust in our work is at stake.

That’s why, during the session, I was pleased to wave a physical copy of our new Best Practice Guide for the Responsible Use of Generative AI, which the AA has helped to develop under the Government’s Online Advertising Taskforce.

I urge every member to download it today. It covers the essential guardrails we need – from transparency and data ethics to understanding the carbon footprint of our AI use, an area of focus for AdGreen and Ad Net Zero.

If we can combine these ethical principles with the development of the ‘superpowers’ we heard about on stage, we can realise the full potential of this new technology, along with all the positive benefits it can bring to the advertising industry and the wider economy.

Download the Best Practice Guide for the Responsible Use of Generative AI in Advertising here.