Brand Safety in Walled Gardens with Zefr’s Emma Lacey
By Eda Osman, Marketing Manager at Sphere
On today’s episode, we’re joined by Emma Lacey, Senior Vice President at Zefr, a frontrunner in responsible, brand-safe marketing that enables transparent, content-level targeting and measurement across walled garden platforms.

As online misinformation becomes increasingly complex and difficult to identify, track, and eliminate, Emma shines a light on the technologies and approaches Zefr is developing to enable brand suitability and safety within walled garden environments such as Meta, YouTube, and TikTok.
Achieving transparency in a complex environment
Zefr helps brands and agencies navigate walled garden platforms — and the plethora of user-generated content circulating on them — to advertise transparently, safely, and suitably.
Emma highlights that the key to this is building more robust advertising campaign strategies that use data intelligently to refine the type of content your brand appears alongside.
“You’re making sure that your brand is being seen in the right places; engaging the right audiences that are in the right contexts, and ultimately, that your suitability preferences are constantly being adhered to,” she says. “And if anything doesn’t quite go to plan, that you at least have consciousness of it, and you’re able to act on it.”
Matching up to industry standards
Emma points out that Zefr is fully aligned with the Global Alliance for Responsible Media (GARM), helping provide unified brand suitability standards across different platforms. She explains that Zefr uses 12 brand suitability categories, combined with different risk rankings, to determine an effective brand suitability strategy in line with a brand’s own preferences.
“It’s really important to have industry standards — not standards that we’re making up — we want to make sure that you have that consistency,” she says. “And that as a brand, you know that you’re actually talking the right language.”
Tackling video misinformation requires an innovative approach
Online misinformation is constantly evolving, and when it appears in video form it carries additional nuances that make it harder to detect. The accessibility of generative AI applications also means bad actors can more easily create unsafe content such as deepfakes.
While AI is currently an industry buzzword, Emma points out that Zefr has long used bespoke AI solutions to detect misinformation, even in long and complex video formats.
“What we do is use human moderation to train our AI. And that’s a real differentiator for us as a business.”
She adds that this approach forms the basis of a predictive model which, combined with GARM standards and input from leading fact-checking organizations worldwide, anticipates future misinformation trends to enhance Zefr’s detection capabilities.
Election season is set to supercharge misinformation
With the US presidential election and UK general election fast approaching, it’s essential that legitimate and authentic content sharing can take place online without fake news or misinformation distracting social media users’ attention.
Emma emphasizes that this is crucial for people’s decision-making, with young people especially vulnerable to online misinformation.
“Being able to give people on the platforms freedom of speech, for them to be able to have a community within these environments; that’s really important,” she comments. “While also making sure that brands aren’t funding the bad stuff, or being adjacent to things that they are not comfortable being adjacent to.”
Brands’ advertising budgets can contribute to the problem
In the online media landscape, brands’ advertising budgets are often allocated programmatically, meaning brands don’t know exactly which publishers or content creators they are funding.
Emma underlines that it’s vital your ad dollars aren’t funding content you’re not happy to be adjacent to, which can damage your reputation and strengthen those you don’t align with.
“You need to make sure that your ad dollars are doing what you intended them to do. So the first thing is having that level of transparency, that’s incredibly important. The second thing is then, what controls can you put into place?”
Brands must work alongside platforms to stifle misinformation
Emma encourages brands to consider their own unique risk levels when advertising online.
How risky a type of content is for a brand depends on what the brand does and which audiences it’s trying to reach. For example, a video game company promoting an action-based game that contains violence might not have a problem appearing next to content involving weapons, but a clothing brand might.
“The scale of this problem is only going to get bigger. The platforms really do a great job at keeping the really egregious content away from brands. But it is the subtlety of your brand preferences that you want to make sure you’ve got control over,” she concludes.