Why the Evolution of Deepfakes is a Wake-Up Call for Brands | Advertising Week, July 2024
By Emma Lacey, SVP EMEA, Zefr
Deepfakes — AI-generated videos that can convincingly depict people saying or doing things they never actually said or did — are becoming increasingly sophisticated and harder to identify. No longer just technological curiosities, deepfakes now have the potential to be used as tools for deception.
Their use may range from benign entertainment to malicious misinformation campaigns, identity theft, and phishing scams. As the technology advances, strengthened by rapidly evolving voice-replication models such as VALL-E, the ability to create highly realistic fake videos becomes accessible to a far broader audience, amplifying the risk of misuse.
Famous instances of deepfakes have included fabricated images showing Donald Trump’s arrest and a manipulated video portraying Joe Biden making controversial statements, both of which quickly garnered widespread attention online. Consequently, UK authorities have flagged deepfakes as a top AI-related crime threat, highlighting their potential to erode trust in digital media, spread false information, and influence public opinion on critical issues such as elections.
While these digital forgeries pose a severe challenge to everyone online, they can be especially troublesome for brands advertising on social media platforms.
The impact of deepfakes on brands
Research by MAGNA and Zefr indicates that when ads appear alongside misinformation, consumer trust in the associated brands diminishes. This erosion of credibility and reputation can have long-lasting effects, harming consumer relationships and brand equity. Additionally, nearly 90% of consumers believe it is a brand’s duty to guarantee their advertisements are displayed in appropriate, trustworthy settings.
This is why brands must be vigilant about where their ads are placed, ensuring they do not inadvertently support or become associated with misleading or harmful deepfake content. Fortunately, as deepfake technology becomes more sophisticated, so do the tools to identify and tackle it, and a wide range of options now makes this vigilance possible.
To safeguard media placements and maintain consumer trust, brands need to adopt comprehensive strategies that leverage emerging technologies and align with industry standards and government regulations.
How to deal with deepfakes
First, brands should incorporate robust fact-checking solutions into their campaign strategies. Partnering with organisations that specialise in detecting misinformation and verifying content can help ensure ads are not placed next to false or deceptive information. Fact-checking tools can analyse the context, intent, and content of media placements, flagging potential deepfakes and other misleading content before ads are published.
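As a rough sketch of what such a pre-placement check might look like in practice, the Python below gates an ad placement on the verdict of a verification partner. The verify_placement() helper and its response fields are hypothetical stand-ins for whatever API a given fact-checking vendor actually exposes.

```python
# Illustrative sketch only: a pre-flight check that consults a fact-checking
# service before an ad placement is approved. verify_placement() and its fields
# are placeholders, not a real vendor API.
from dataclasses import dataclass


@dataclass
class PlacementVerdict:
    flagged: bool       # True if the adjacent content looks like misinformation
    reason: str         # e.g. "suspected deepfake", "disputed claim"
    confidence: float   # 0.0-1.0 score returned by the verification partner


def verify_placement(content_url: str) -> PlacementVerdict:
    """Placeholder for a call to a third-party fact-checking / verification service."""
    # A real integration would call the partner's API; here we return a dummy verdict.
    return PlacementVerdict(flagged=False, reason="", confidence=0.0)


def approve_placement(content_url: str, block_threshold: float = 0.7) -> bool:
    """Approve the placement only if the adjacent content is not flagged with high confidence."""
    verdict = verify_placement(content_url)
    if verdict.flagged and verdict.confidence >= block_threshold:
        print(f"Blocked placement next to {content_url}: {verdict.reason}")
        return False
    return True


if __name__ == "__main__":
    print(approve_placement("https://example.com/video/123"))
```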
Additionally, emerging technologies such as AI and machine learning offer powerful tools for identifying and mitigating the risks of deepfakes. Advanced discriminative AI models, trained on up-to-date definitions of misinformation, can analyse video content for anomalies that indicate fabrication. These technologies can examine nuanced signals such as facial movements, voice patterns, and background inconsistencies, providing a level of scrutiny that traditional methods cannot match.
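To make the idea concrete, here is a minimal, non-production sketch of how frame-level scores from such a discriminative model might be aggregated into a video-level decision. The score_frame() stub is a placeholder rather than a real detector, and a production system would also weigh audio and temporal-consistency signals.

```python
# Minimal sketch: aggregate per-frame "synthetic" probabilities into a
# video-level deepfake decision. score_frame() is a stub for whatever
# classifier a detection vendor or model actually provides.
from statistics import mean
from typing import Iterable, List


def score_frame(frame_bytes: bytes) -> float:
    """Placeholder per-frame classifier; returns P(frame is synthetic)."""
    return 0.0  # dummy value; a real model would be invoked here


def score_video(frames: Iterable[bytes]) -> float:
    """Average the frame-level scores into a single video-level score."""
    scores: List[float] = [score_frame(f) for f in frames]
    return mean(scores) if scores else 0.0


def is_likely_deepfake(frames: Iterable[bytes], threshold: float = 0.8) -> bool:
    """Flag the video if the aggregate score crosses the chosen threshold."""
    return score_video(frames) >= threshold
```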
Guidelines, industry standards and education
Developing and adhering to strict brand suitability criteria is also crucial. Brands must define clear guidelines about the types of content and contexts that align with their values and message. By setting high standards for media placements, brands can avoid association with harmful or misleading content. To keep these criteria up to date, brands can draw on guidelines from leading organisations such as the Global Alliance for Responsible Media (GARM), identifying and defunding deepfakes within their media strategy while redirecting advertising spend towards trustworthy, brand-safe publishers.
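As a rough illustration, suitability criteria of this kind can be expressed as a machine-readable policy that a buying platform checks before spend is committed. The category names and actions below are simplified assumptions, loosely echoing GARM-style risk tiers rather than the official schema.

```python
# Rough illustration only: brand suitability criteria encoded as a simple policy.
# Category names and actions are simplified assumptions, not the official GARM schema.
from typing import Dict, List

SUITABILITY_POLICY: Dict[str, str] = {
    "misinformation": "block",             # e.g. suspected deepfakes: never monetise
    "debated_sensitive_topics": "review",  # allow only after manual review
    "news_and_politics": "allow",
}


def placement_allowed(content_labels: List[str], policy: Dict[str, str]) -> bool:
    """Approve a placement only if none of the content's labels is set to 'block'."""
    return not any(policy.get(label) == "block" for label in content_labels)


# Spend blocked here can then be redirected towards placements that pass the check.
print(placement_allowed(["news_and_politics"], SUITABILITY_POLICY))  # True
print(placement_allowed(["misinformation"], SUITABILITY_POLICY))     # False
```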
By participating in industry-wide initiatives such as GARM, brands can also contribute to a collective effort to create a safer digital environment, helping in the development of standardised tools and protocols for identifying deepfakes and other forms of misinformation.
Moreover, an informed consumer base is a crucial line of defence against the impact of deepfakes. Brands should invest in educational campaigns that raise awareness about deepfakes and the importance of critical evaluation of digital content. By empowering consumers with knowledge, brands can help mitigate the influence of misinformation and foster a more discerning public.
The challenge of deepfakes is multifaceted, and it demands an equally multifaceted, proactive response from brands. By integrating AI-powered fact-checking solutions, adhering to industry-wide brand safety and suitability criteria, and educating consumers, brands can safeguard their media placements against deepfakes. This comprehensive approach not only protects consumer trust and brand reputation but also contributes to the broader goal of maintaining a credible and trustworthy digital ecosystem for all.