March 2024

How brands can tackle political misinformation in an age of generative AI

Emma Lacey, SVP EMEA at Zefr, explores how regulations and technology are helping businesses stifle AI-generated political misinformation.



With 65% of the global population actively using the internet, and more than half of those users getting their news primarily from social media, online information plays an increasingly pivotal role in people’s education and decision-making, including about politics.

However, generative AI is being used to propagate misinformation online, misleading users and calling into question the reliability of much politically focused content. Examples of this phenomenon include fake images of Donald Trump being arrested and a fake video of Joe Biden making offensive remarks, both of which rapidly racked up impressions online.

According to recent global research from Ipsos, 87% of users are concerned about the impact of misinformation on elections in their country, with 47% describing themselves as very concerned. What’s more, almost 90% of consumers agree that it is a brand’s responsibility to ensure its ads appear in suitable, reliable environments. Brands whose ads appear next to content containing political misinformation therefore risk losing significant trust and respect from consumers.

With the 2024 UK and US elections fast approaching, how can brands best equip themselves to safely navigate an online landscape replete with AI-generated political misinformation — and even better, help stifle its spread?

The developing dangers of generative AI

While there are many positive use cases of AI as a technology, concerns around generative AI’s capabilities in creating misleading and harmful political content are widespread. Almost 90% of people worldwide believe fake news online has already harmed their country’s politics. British intelligence agencies have further warned that AI could pose significant risks to democracy by 2025. Similarly, 64% of election officials in the US report that their jobs are made more dangerous by the spread of false information.

There are various generative AI technologies that can be used by bad actors to propagate online misinformation in different ways.

Deepfakes are one application of generative AI that can create realistic videos and images of politicians doing things they never did. The UK government specifically lists deepfakes as a major risk ahead of the upcoming General Election, while Sam Altman, CEO of OpenAI, has warned about their potential to manipulate people through persuasive disinformation.

Voice replication technology is another application of generative AI, capable of creating audio clips of politicians saying things they never said. An example is Microsoft’s VALL-E model, which can replicate anyone’s voice from just three seconds of sample audio. Voice replication becomes particularly dangerous when combined with deepfakes and lip-syncing technology, producing fake clips with matching audio and video.

Preparing to navigate political misinformation

As AI threatens to further accelerate political misinformation online, with political AI content already garnering over a billion views across social platforms and politically focused misinformation growing by 129.6% quarter on quarter, a combination of industry-wide standards, government interventions, and technological solutions is needed to tackle its impact.

This is particularly important for brands that want to avoid damaging their reputation and customer relationships, and to avoid inadvertently funding distributors of political misinformation through their ad spend.

A key way brands can keep their ads away from political misinformation online is by aligning with industry-wide standards that unify advertisers and publishers around the latest content safety classifications. One notable standard has been laid out by the Global Alliance for Responsible Media (GARM), which helps marketers identify and demonetise sources of online misinformation throughout their media planning, and instead direct their advertising budgets towards brand-safe, reliable publishers.

Government interventions can also play an important role in the fight against AI-generated political misinformation. In 2023, Rishi Sunak launched a £100m UK taskforce with the aim of progressing AI safely with ‘guard rails’ in place. This venture came as part of the UK government’s mission to implement a pro-innovation approach to AI regulation, ensuring prosperous growth of the technology while curbing its potential for misuse.

Furthermore, brand safety solutions powered by discriminative AI are increasingly effective at identifying AI-generated political misinformation. These innovations can examine the context and intent of dynamic forms of content, including video, to determine whether it is AI-generated or intended to mislead or cause harm. As discriminative and contextual AI solutions are still developing, their accuracy in detecting the complex nuances of online content is further amplified when they are deployed alongside training and oversight from skilled human moderators.

Tackling AI-generated political misinformation online requires a comprehensive, industry-wide effort. While social platforms and governments have a role to play, brands must take the lead in identifying sources of online misinformation by embracing the latest brand safety solutions, ensuring their advertising budgets do not contribute to the problem. By taking this approach, brands can protect their customers and their public image, while helping to maintain healthy, fact-based political discourse online.

