The UK’s 2024 general election will be historic for at least one reason. For the first time, AI is playing a major role in election communications and marketing. At the last general election, just five years ago, this wasn’t a consideration – the technology simply wasn’t there.
A lot has changed since 2019 and it’s sensible to expect the next few years to usher in a lot more change, too. So, in what ways is AI being used in the current election campaign, and what lessons can businesses take from this?
Not long after the general election date was confirmed and the campaign officially opened, the Alan Turing Institute issued a grim warning about the potential of AI to mislead the public and erode confidence in the integrity of the electoral process. The full research paper can be seen here; admittedly, it is based on limited data, but there’s no denying the potential issue posed by AI-enabled content – something that businesses should be aware of as much as politicians.
The study showed that 19 (16.9%) of the 112 national elections held worldwide since January 2023 showed evidence of AI interference. A minority, yes, but certainly cause for caution. AI-generated deepfake audio clips and videos of Sadiq Khan and Keir Starmer were doing the rounds on social media even before the campaign was announced. The threat is being taken seriously enough by the government that official guidance was issued for electoral candidates on 5th June.
Generative AI will almost certainly be used extensively in this general election campaign, which opens the door to risks such as fake campaign endorsements, deepfakes designed to discredit political figures, AI-generated misinformation about how and where to vote, and even allegations of fraud intended to damage electoral integrity. Aside from the security implications of these risks, content of this nature could create confusion among the public over which communications are real and which are not.
The researchers have suggested clear guidelines for how AI content is used in the run-up to the election, including the need to clearly label AI-created and AI-enabled content as such, and to be transparent about the sources used to create it.
A more positive, but equally novel, use of AI comes from Brighton Pavilion electoral candidate Steve Endacott, who is putting himself forward as the UK’s first AI MP. Mr Endacott, who is standing as an independent candidate, has created an AI avatar called ‘AI Steve’ to answer questions and address queries from constituents, using a type of chatbot technology. The press has reported this as an eccentric story, but it could foreshadow the ways businesses use AI avatars and virtual customer service reps to engage with customers on social media, in virtual meetings, and through their websites in the near future.
In the world of business marketing, how do SMEs counter the potential confusion and reputational risk posed by AI to their brand? The first defence is straightforward transparency. It makes sense to clearly communicate how AI is used in content creation, so that customers know when they are interacting with AI-generated content. This doesn’t close the door to using AI content, but gives customers greater choice and reinforces trust in the business.
Consistency and values are another way of bolstering the authenticity of your content. Whether human-written, curated, or AI-generated, all content published on any channel must be consistent with the style, USP, and ethical values of your brand. Staying true to your core brand values and reflecting them in all your marketing collateral will reinforce your relationship with your prospects and cultivate greater trust and authenticity.
We will be watching the general election campaign closely over the next few weeks and will report on any interesting trends that could be relevant to your marketing campaigns. In the meantime, please get in touch with any questions about digital inbound marketing and find out how we can support you to grow your business.
Image Source: Canva