
Preserving Brand Authenticity Amid Generative-AI Powered Misinformation

November 6, 2023

By Or Levi

Lately, it seems, misinformation is as pervasive as the devices we use to access the web; generative AI has become both a facilitator and a combatant in the ongoing information wars. As the U.S. gears up for the 2024 elections, the risk of AI-fueled misinformation looms larger than ever.

While not new, misinformation is changing with advancements in technology. In marketing, generative AI is viewed as both an opportunity and a threat. Brands today may find themselves on a slippery ethical slope, with deepfake technology rapidly pushing the boundaries of what’s acceptable.

A New Landscape of Risk

In 2017, a deepfake video of Barack Obama reverberated across the internet, marking what many then considered “the future of fake news.” In 2023, the terrain has become even more fraught, technologically and ethically. Generative AI has enabled an alarming rise in misleading or inappropriate content, ranging from divisive commentary to false narratives, that often draws substantial viewership across social platforms.

Columbia University researchers point out that while malicious image manipulation has always been possible through tools like Photoshop, generative AI has lowered the barrier to entry and reduced the costs, making it easier than ever to create deceptive content. It used to be much easier to “know it when I see it.” Now, the breadth of content types and the subtle outputs of generative AI have made it incredibly challenging for traditional detection methods to accurately capture all instances of brand risk. Violent imagery or other forms of unsuitable content can be seamlessly integrated into what appears to be benign material.

For instance, according to internal research, Zefr’s technology has identified over 300 million views of AI-created video content classified as misinformation. The boundaries between suitable and unsuitable content will only become more blurred, so brands must navigate with ethical and social responsibility at the forefront.

How AI-Generated Content Is Impacting Brands Amid the Misinformation Era

Generative AI has made tools once reserved for the technically sophisticated available to the general public. Virtually anyone with a smartphone and internet access can churn out anything from viral song covers to eerily accurate impersonations of political figures. Zefr’s technology has identified a massive uptick in AI-generated content views across social platforms. Over 1.5 billion views this year alone are tied to content featuring AI-generated presidential voices, for example. All verticals, from retail to healthcare to gaming, are on notice: this isn’t a fad, but a seismic shift.

This democratization of AI tools equates to huge creative opportunities, but it also opens Pandora’s box for misinformation and disinformation. At a time when public trust is already fragile, the use of AI-generated content by malicious actors poses a clear and present danger to brand integrity. Imagine an AI-generated audio clip impersonating your brand spokesperson, spewing false information. Or consider the copyright implications if your brand’s IP becomes part of a viral deceptive narrative. For sectors like finance, healthcare, or news, where veracity is paramount, the stakes couldn’t be higher.

Brands have no choice but to evolve their strategies. It’s no longer enough to check a “detection and monitoring” box and assume your ad placements are safe and suitable. Automated, real-time monitoring, coupled with human oversight, should be the new normal. Taking proactive steps to safeguard brand voice and intellectual property is non-negotiable.

The Paradox of Open Platform Freedoms

Freedom of speech and expression is the lifeblood of social platforms, powering thriving ecosystems where advertisers, consumers, and new ideas can flourish. This democratic openness is also their Achilles’ heel – what allows the platforms to thrive is also what makes them enormously complex to govern, especially for advertisers. The shortcomings of the traditional methods platforms use to curb misinformation are becoming more glaring, and the rise of AI-generated falsehoods only adds fuel to the fire.

Enter large language models (LLMs) – agile tools designed to analyze context and tone across sprawling data sets. They’ve emerged as tools for content moderation, but here’s the catch: LLMs tend to hallucinate, and they’ve proven to have a difficult time discerning misinformation from fact. Their propensity to generate false positives and false negatives makes them a shaky ally in the war against disinformation.

Combining LLMs with human judgment may create more robust content filtering capabilities, but make no mistake: they’re not a cure-all. The complexities of misinformation demand a more nuanced, technical approach, and LLMs alone won’t let platforms leapfrog straight into the future of effective content moderation.
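
To make that combined approach concrete, here is a minimal sketch of how an LLM-plus-human-review pipeline might be wired together. The classify_with_llm function, the label set, and the confidence threshold are all illustrative assumptions, not Zefr’s or any platform’s actual implementation.

```python
# Hypothetical sketch of an LLM-plus-human-review moderation pipeline.
# classify_with_llm, the labels, and the 0.85 threshold are illustrative
# assumptions, not any platform's actual system.
from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 0.85  # below this confidence, escalate to a person


@dataclass
class Verdict:
    label: str         # e.g., "misinformation", "suitable", "unsuitable"
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    needs_human: bool   # whether a reviewer should confirm the call


def classify_with_llm(text: str) -> tuple[str, float]:
    """Placeholder for a call to an LLM classification endpoint."""
    # In practice this would prompt a model to label the content and
    # return a label plus a calibrated confidence score.
    return ("suitable", 0.5)


def moderate(text: str) -> Verdict:
    label, confidence = classify_with_llm(text)
    # Low-confidence calls, and any misinformation call, are escalated
    # rather than trusted, since LLMs can hallucinate or misjudge
    # borderline content.
    needs_human = confidence < HUMAN_REVIEW_THRESHOLD or label == "misinformation"
    return Verdict(label, confidence, needs_human)


if __name__ == "__main__":
    print(moderate("Example post text goes here."))
```

The point of the design is the escalation path: any verdict the model is unsure about, or any misinformation call, is routed to a human reviewer rather than acted on automatically.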

Nurturing Critical Discourse

In a world where the line between fact and fiction is increasingly blurred, brands have the power to equip society with the tools to recognize and combat misinformation. Misinformation is persistent, especially during elections. But if we can introduce new technologies that enhance media literacy and foster critical dialogue, we have a fighting chance to navigate the labyrinth of the misinformation age.

Ultimately, the moral and ethical dimensions of generative AI usage require serious contemplation. For marketers, the call to action is clear: be proactive, be ethical, and most importantly, be prepared for a constantly changing landscape.

The views and opinions expressed are solely those of the contributor and do not necessarily reflect the official position of the ANA or imply endorsement from the ANA.


Or Levi is Zefr’s VP of data science.
